Jan 30 21:14:20 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 30 21:14:20 crc restorecon[4671]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 
21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc 
restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:14:21 crc restorecon[4671]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 30 21:14:21 crc kubenswrapper[4751]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 21:14:21 crc kubenswrapper[4751]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 30 21:14:21 crc kubenswrapper[4751]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 21:14:21 crc kubenswrapper[4751]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
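[editor's note] The long run of restorecon[4671] messages above is restorecon walking /var/lib/kubelet and declining to relabel anything whose reported type (container_file_t, with per-pod category pairs such as s0:c7,c13 or s0:c682,c947) is on the policy's customizable-types list; "not reset as customized by admin" is the verbatim wording restorecon uses for that case, and on RHEL-family systems container_file_t is among the customizable types. A minimal sketch for summarizing such a capture, assuming the journal was saved to a file ("kubelet-start.log" is a hypothetical name) and matching only the message format shown above:

```python
# Hedged sketch: summarize the restorecon "not reset as customized by admin"
# entries in a saved copy of this journal. "kubelet-start.log" is a
# placeholder file name; the regex follows the message format visible above.
import re
from collections import Counter

SKIPPED = re.compile(
    r"restorecon\[\d+\]: (?P<path>\S+) not reset as customized by admin "
    r"to (?P<context>\S+)"
)

def summarize(log_path: str) -> None:
    by_type: Counter[str] = Counter()  # SELinux type in the reported context
    by_pod: Counter[str] = Counter()   # pod UID under /var/lib/kubelet/pods
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = SKIPPED.search(line)
            if not m:
                continue
            # context looks like system_u:object_r:container_file_t:s0:c7,c13
            parts = m.group("context").split(":")
            if len(parts) > 2:
                by_type[parts[2]] += 1
            pod = re.search(r"/var/lib/kubelet/pods/([^/]+)/", m.group("path"))
            if pod:
                by_pod[pod.group(1)] += 1
    print("skipped files per reported type:", dict(by_type))
    print("top pods by skipped files:", by_pod.most_common(5))

if __name__ == "__main__":
    summarize("kubelet-start.log")  # hypothetical capture of this journal
```

In the excerpt above every skipped context is container_file_t, so the per-type counter mostly confirms that; the per-pod counter shows the catalog-content volumes of the two registry pods account for the bulk of the entries.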
Jan 30 21:14:21 crc kubenswrapper[4751]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 21:14:21 crc kubenswrapper[4751]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.695681 4751 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.700757 4751 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.700926 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.701030 4751 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.701122 4751 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.701223 4751 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.701316 4751 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.701445 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.701554 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.701648 4751 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.701738 4751 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.701827 4751 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.701914 4751 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702012 4751 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702104 4751 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702191 4751 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702289 4751 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702410 4751 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702502 4751 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702589 4751 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702693 4751 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702787 4751 feature_gate.go:330] unrecognized feature gate: 
VSphereMultiNetworks Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702875 4751 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702962 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.703080 4751 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.703171 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.703259 4751 feature_gate.go:330] unrecognized feature gate: Example Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.703394 4751 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.703514 4751 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.703611 4751 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.703716 4751 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.703991 4751 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.704093 4751 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.704182 4751 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.704284 4751 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.704426 4751 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.704524 4751 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.704615 4751 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.704708 4751 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.704815 4751 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.704912 4751 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705017 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705109 4751 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705198 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705286 4751 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705418 4751 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705516 4751 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705604 4751 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705709 4751 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705803 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705891 4751 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705990 4751 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.706080 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.706169 4751 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.706258 4751 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.706389 4751 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.706490 4751 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.706579 4751 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.706703 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.706802 4751 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.706892 4751 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.706980 4751 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.707113 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.707208 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.707297 4751 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.707431 4751 
feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.707533 4751 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.707628 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.707719 4751 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.707826 4751 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.707921 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.708010 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.708255 4751 flags.go:64] FLAG: --address="0.0.0.0" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.708407 4751 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.708520 4751 flags.go:64] FLAG: --anonymous-auth="true" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.708617 4751 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.708730 4751 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.708827 4751 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.708922 4751 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.709018 4751 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.709111 4751 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.709203 4751 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.709296 4751 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.709571 4751 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.709675 4751 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.709767 4751 flags.go:64] FLAG: --cgroup-root="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.709858 4751 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.709951 4751 flags.go:64] FLAG: --client-ca-file="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.710060 4751 flags.go:64] FLAG: --cloud-config="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.710187 4751 flags.go:64] FLAG: --cloud-provider="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.710321 4751 flags.go:64] FLAG: --cluster-dns="[]" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.710480 4751 flags.go:64] FLAG: --cluster-domain="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.710598 4751 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.710717 4751 flags.go:64] FLAG: --config-dir="" Jan 30 
21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.710817 4751 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.710912 4751 flags.go:64] FLAG: --container-log-max-files="5" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.711032 4751 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.711129 4751 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.711245 4751 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.711406 4751 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.711508 4751 flags.go:64] FLAG: --contention-profiling="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.711619 4751 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.711724 4751 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.711819 4751 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.711912 4751 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.712017 4751 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.712121 4751 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.712214 4751 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.712305 4751 flags.go:64] FLAG: --enable-load-reader="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.712434 4751 flags.go:64] FLAG: --enable-server="true" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.712530 4751 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.712631 4751 flags.go:64] FLAG: --event-burst="100" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.712756 4751 flags.go:64] FLAG: --event-qps="50" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.712861 4751 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.712955 4751 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.713049 4751 flags.go:64] FLAG: --eviction-hard="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.713146 4751 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.713239 4751 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.713364 4751 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.713468 4751 flags.go:64] FLAG: --eviction-soft="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.713563 4751 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.713681 4751 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.713779 4751 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 
21:14:21.713871 4751 flags.go:64] FLAG: --experimental-mounter-path="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.713963 4751 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.714055 4751 flags.go:64] FLAG: --fail-swap-on="true" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.714147 4751 flags.go:64] FLAG: --feature-gates="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.714246 4751 flags.go:64] FLAG: --file-check-frequency="20s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.714377 4751 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.714479 4751 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.714573 4751 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.714666 4751 flags.go:64] FLAG: --healthz-port="10248" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.714758 4751 flags.go:64] FLAG: --help="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.714850 4751 flags.go:64] FLAG: --hostname-override="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.714941 4751 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.715075 4751 flags.go:64] FLAG: --http-check-frequency="20s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.715176 4751 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.715267 4751 flags.go:64] FLAG: --image-credential-provider-config="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.715436 4751 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.715537 4751 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.715629 4751 flags.go:64] FLAG: --image-service-endpoint="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.715720 4751 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.715829 4751 flags.go:64] FLAG: --kube-api-burst="100" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.715926 4751 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.716020 4751 flags.go:64] FLAG: --kube-api-qps="50" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.716111 4751 flags.go:64] FLAG: --kube-reserved="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.716203 4751 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.716306 4751 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.716440 4751 flags.go:64] FLAG: --kubelet-cgroups="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.716550 4751 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.716646 4751 flags.go:64] FLAG: --lock-file="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.716740 4751 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.716835 4751 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.716927 4751 flags.go:64] FLAG: 
--log-json-info-buffer-size="0" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717024 4751 flags.go:64] FLAG: --log-json-split-stream="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717115 4751 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717209 4751 flags.go:64] FLAG: --log-text-split-stream="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717311 4751 flags.go:64] FLAG: --logging-format="text" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717484 4751 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717543 4751 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717556 4751 flags.go:64] FLAG: --manifest-url="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717567 4751 flags.go:64] FLAG: --manifest-url-header="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717585 4751 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717596 4751 flags.go:64] FLAG: --max-open-files="1000000" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717613 4751 flags.go:64] FLAG: --max-pods="110" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717623 4751 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717633 4751 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717642 4751 flags.go:64] FLAG: --memory-manager-policy="None" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717651 4751 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717663 4751 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717673 4751 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717683 4751 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717707 4751 flags.go:64] FLAG: --node-status-max-images="50" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717716 4751 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717726 4751 flags.go:64] FLAG: --oom-score-adj="-999" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717737 4751 flags.go:64] FLAG: --pod-cidr="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717746 4751 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717761 4751 flags.go:64] FLAG: --pod-manifest-path="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717771 4751 flags.go:64] FLAG: --pod-max-pids="-1" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717780 4751 flags.go:64] FLAG: --pods-per-core="0" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717789 4751 flags.go:64] FLAG: --port="10250" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717799 4751 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 
21:14:21.717808 4751 flags.go:64] FLAG: --provider-id="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717817 4751 flags.go:64] FLAG: --qos-reserved="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717826 4751 flags.go:64] FLAG: --read-only-port="10255" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717836 4751 flags.go:64] FLAG: --register-node="true" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717846 4751 flags.go:64] FLAG: --register-schedulable="true" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717858 4751 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717875 4751 flags.go:64] FLAG: --registry-burst="10" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717884 4751 flags.go:64] FLAG: --registry-qps="5" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717893 4751 flags.go:64] FLAG: --reserved-cpus="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717902 4751 flags.go:64] FLAG: --reserved-memory="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717913 4751 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717922 4751 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717931 4751 flags.go:64] FLAG: --rotate-certificates="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717941 4751 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717950 4751 flags.go:64] FLAG: --runonce="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717958 4751 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717968 4751 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717977 4751 flags.go:64] FLAG: --seccomp-default="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717986 4751 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717995 4751 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718005 4751 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718015 4751 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718024 4751 flags.go:64] FLAG: --storage-driver-password="root" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718034 4751 flags.go:64] FLAG: --storage-driver-secure="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718044 4751 flags.go:64] FLAG: --storage-driver-table="stats" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718053 4751 flags.go:64] FLAG: --storage-driver-user="root" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718062 4751 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718072 4751 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718083 4751 flags.go:64] FLAG: --system-cgroups="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718092 4751 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 
21:14:21.718107 4751 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718115 4751 flags.go:64] FLAG: --tls-cert-file="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718124 4751 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718138 4751 flags.go:64] FLAG: --tls-min-version="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718147 4751 flags.go:64] FLAG: --tls-private-key-file="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718156 4751 flags.go:64] FLAG: --topology-manager-policy="none" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718165 4751 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718174 4751 flags.go:64] FLAG: --topology-manager-scope="container" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718183 4751 flags.go:64] FLAG: --v="2" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718195 4751 flags.go:64] FLAG: --version="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718207 4751 flags.go:64] FLAG: --vmodule="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718218 4751 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718228 4751 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718489 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718501 4751 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718512 4751 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718521 4751 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718531 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718539 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718547 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718558 4751 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718569 4751 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
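The flags.go:64 FLAG lines above are the kubelet echoing the effective value of every command-line flag, one per entry, with the value quoted. Note the dump does not distinguish explicitly set flags from defaults, and anything the --config file (/etc/kubernetes/kubelet.conf here) specifies can supersede the defaults shown, so e.g. --authorization-mode="AlwaysAllow" is not necessarily what the kubelet ends up enforcing. A minimal sketch, under the same hypothetical kubelet.log assumption, that folds the dump into a dict for quick lookup:

```python
import re

# Each entry looks like: flags.go:64] FLAG: --max-pods="110"
# \s+ tolerates a line break between "FLAG:" and the flag name.
FLAG_LINE = re.compile(r'FLAG:\s+(--[\w-]+)="(.*?)"')

def effective_flags(journal_text: str) -> dict[str, str]:
    return dict(FLAG_LINE.findall(journal_text))

with open("kubelet.log") as f:  # hypothetical path
    flags = effective_flags(f.read())

print(flags["--node-ip"])               # 192.168.126.11
print(flags["--register-with-taints"])  # node-role.kubernetes.io/master=:NoSchedule
```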
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718580 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718589 4751 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718598 4751 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718608 4751 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718619 4751 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718630 4751 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718640 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718651 4751 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718663 4751 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718673 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718682 4751 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718692 4751 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718702 4751 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718711 4751 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718719 4751 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718727 4751 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718735 4751 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718743 4751 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718750 4751 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718758 4751 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718767 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718776 4751 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718786 4751 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718796 4751 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718805 4751 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718818 4751 feature_gate.go:330] unrecognized feature gate: 
NetworkLiveMigration Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718828 4751 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718837 4751 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718847 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718857 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718864 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718875 4751 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718885 4751 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718894 4751 feature_gate.go:330] unrecognized feature gate: Example Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718903 4751 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718911 4751 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718918 4751 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718926 4751 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718935 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718943 4751 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718951 4751 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718959 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718966 4751 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718974 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719011 4751 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719021 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719028 4751 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719038 4751 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719049 4751 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719057 4751 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719066 4751 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719074 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719081 4751 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719089 4751 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719096 4751 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719104 4751 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719112 4751 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719120 4751 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719127 4751 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719134 4751 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719142 4751 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719151 4751 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.719164 4751 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.732078 4751 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.732135 4751 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732302 4751 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732364 4751 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732374 4751 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732386 4751 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
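Three message shapes repeat through this stretch: feature_gate.go:330 warnings for gates the kubelet's own registry does not know (these look like OpenShift cluster-level gates passed down to the node), :351/:353 notes for deprecated or GA gates being set explicitly, and a :386 line giving the resolved gate map. The full set is re-logged on each parse pass, which is why the same list appears several times. A minimal sketch, same hypothetical kubelet.log, that tallies the unknown gates and decodes the Go map literal from the summary line:

```python
import re
from collections import Counter

with open("kubelet.log") as f:  # hypothetical path
    log = f.read()

# Every parse pass re-logs each unknown gate, so the per-gate count
# equals the number of passes.
unknown = Counter(re.findall(r"unrecognized feature gate:\s+(\w+)", log))

# The resolved gates are printed as a Go map literal, e.g.
# feature gates: {map[KMSv1:true NodeSwap:false ...]}
m = re.search(r"feature gates: \{map\[(.*?)\]\}", log)
gates = {k: v == "true" for k, v in
         (kv.split(":", 1) for kv in m.group(1).split())}

print(gates["KMSv1"], gates["NodeSwap"])  # True False
```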
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732398 4751 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732407 4751 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732417 4751 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732426 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732434 4751 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732442 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732449 4751 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732457 4751 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732465 4751 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732472 4751 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732480 4751 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732488 4751 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732495 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732503 4751 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732511 4751 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732519 4751 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732528 4751 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732536 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732544 4751 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732553 4751 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732561 4751 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732569 4751 feature_gate.go:330] unrecognized feature gate: Example Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732577 4751 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732586 4751 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732593 4751 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732601 4751 feature_gate.go:330] unrecognized feature gate: 
AdditionalRoutingCapabilities Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732609 4751 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732617 4751 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732624 4751 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732632 4751 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732642 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732649 4751 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732657 4751 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732668 4751 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732677 4751 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732685 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732695 4751 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732705 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732714 4751 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732723 4751 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732733 4751 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732744 4751 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732752 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732760 4751 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732769 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732776 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732784 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732791 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732799 4751 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732807 4751 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732815 4751 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732822 4751 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732833 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732841 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732848 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732857 4751 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732864 4751 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732872 4751 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732879 4751 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732887 4751 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732895 4751 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732902 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732910 4751 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732917 4751 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732925 4751 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732933 4751 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.732942 4751 feature_gate.go:330] unrecognized feature gate: 
SigstoreImageVerification Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.732955 4751 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733216 4751 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733230 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733239 4751 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733250 4751 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733261 4751 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733271 4751 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733279 4751 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733289 4751 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733297 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733306 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733313 4751 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733345 4751 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733354 4751 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733362 4751 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733369 4751 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733377 4751 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733386 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733394 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733402 4751 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733410 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733417 4751 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 
21:14:21.733424 4751 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733432 4751 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733439 4751 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733447 4751 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733457 4751 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733466 4751 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733474 4751 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733483 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733490 4751 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733498 4751 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733505 4751 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733513 4751 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733521 4751 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733529 4751 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733538 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733545 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733553 4751 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733561 4751 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733570 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733577 4751 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733585 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733592 4751 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733600 4751 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733609 4751 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733616 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733624 4751 feature_gate.go:330] unrecognized feature gate: 
AdditionalRoutingCapabilities Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733632 4751 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733639 4751 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733647 4751 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733654 4751 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733663 4751 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733671 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733678 4751 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733686 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733694 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733702 4751 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733709 4751 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733717 4751 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733725 4751 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733735 4751 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733744 4751 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733753 4751 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733761 4751 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733770 4751 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733778 4751 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733787 4751 feature_gate.go:330] unrecognized feature gate: Example Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733796 4751 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733804 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733814 4751 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.733824 4751 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.733836 4751 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.735309 4751 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.744595 4751 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.744727 4751 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.747381 4751 server.go:997] "Starting client certificate rotation"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.747436 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.747664 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-01 22:00:11.754174236 +0000 UTC
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.747771 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.777617 4751 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.780394 4751 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 21:14:21 crc kubenswrapper[4751]: E0130 21:14:21.782144 4751 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.799752 4751 log.go:25] "Validated CRI v1 runtime API"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.839605 4751 log.go:25] "Validated CRI v1 image API"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.842299 4751 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.850267 4751 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-30-21-10-00-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.850315 4751 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252
minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.877514 4751 manager.go:217] Machine: {Timestamp:2026-01-30 21:14:21.87360574 +0000 UTC m=+0.619428459 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd BootID:1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:f0:b1:31 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:f0:b1:31 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b2:de:88 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:bd:80:e7 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:b1:a5:02 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a2:6f:a3 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:f2:12:a0:7f:40:cc Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:72:bc:5e:8d:70:6b Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 
Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.877922 4751 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.878232    4751 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.878713    4751 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.879071    4751 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.879126    4751 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.879504    4751 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.879522    4751 container_manager_linux.go:303] "Creating device plugin manager"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.880435    4751 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.880478    4751 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.881600    4751 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.881775    4751 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.886828    4751 kubelet.go:418] "Attempting to sync node with API server"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.886864    4751 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.886890    4751 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.887028    4751 kubelet.go:324] "Adding apiserver pod source"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.887051    4751 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.893919    4751 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.893946    4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.894033    4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused
Jan 30 21:14:21 crc kubenswrapper[4751]: E0130 21:14:21.894104    4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError"
Jan 30 21:14:21 crc kubenswrapper[4751]: E0130 21:14:21.894132    4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.895886    4751 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.899076    4751 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901398    4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901450    4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901475    4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901489    4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901510    4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901524    4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901536    4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901558    4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901572    4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901586    4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901613    4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901627    4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.903601    4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.904316    4751 server.go:1280] "Started kubelet"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.905819    4751 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.906042    4751 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.906311    4751 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused
Jan 30 21:14:21 crc systemd[1]: Started Kubernetes Kubelet.
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.908649    4751 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.909908    4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.910002    4751 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.911796    4751 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.911838    4751 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.911797    4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 03:47:55.262998858 +0000 UTC
Jan 30 21:14:21 crc kubenswrapper[4751]: E0130 21:14:21.911860    4751 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.911951    4751 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 30 21:14:21 crc kubenswrapper[4751]: E0130 21:14:21.912009    4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="200ms"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.913431    4751 factory.go:55] Registering systemd factory
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.913494    4751 factory.go:221] Registration of the systemd container factory successfully
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.913658    4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused
Jan 30 21:14:21 crc kubenswrapper[4751]: E0130 21:14:21.913795    4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.919083    4751 factory.go:153] Registering CRI-O factory
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.919122    4751 factory.go:221] Registration of the crio container factory successfully
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.919223    4751 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.919252    4751 factory.go:103] Registering Raw factory
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.919275    4751 manager.go:1196] Started watching for new ooms in manager
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.920224    4751 manager.go:319] Starting recovery of all containers
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130
21:14:21.921571 4751 server.go:460] "Adding debug handlers to kubelet server" Jan 30 21:14:21 crc kubenswrapper[4751]: E0130 21:14:21.927605 4751 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.39:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f9eb11091ae73 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:14:21.904277107 +0000 UTC m=+0.650099786,LastTimestamp:2026-01-30 21:14:21.904277107 +0000 UTC m=+0.650099786,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.935919 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936000 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936026 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936049 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936125 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936144 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936168 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936189 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936211 
4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936230 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936249 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936270 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936358 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936383 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936403 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936421 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936471 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936488 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936507 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936525 4751 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936547 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936565 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936585 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936604 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936624 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936642 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936665 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936689 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936710 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936730 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936751 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936770 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936830 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936847 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936867 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936887 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936907 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936927 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936953 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936973 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936992 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937012 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937031 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937050 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937070 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937089 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937108 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937127 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937145 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937164 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937182 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937204 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937228 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937250 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937273 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937294 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937316 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937364 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937382 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937471 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937493 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.938218 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.938322 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.938411 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.938491 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.938516 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.938580 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.938603 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.938656 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.938691 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.940919 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941029 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941081 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941113 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941154 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941179 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941204 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941242 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941266 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941301 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941358 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941386 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941417 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941439 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941470 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941494 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941517 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941549 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941573 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941604 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941627 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941654 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941690 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941714 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941751 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941792 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941817 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941849 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941876 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941899 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941944 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941968 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.942002 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.942026 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.942081 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.942122 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.942160 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.942198 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.942233 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.949685 4751 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.949793 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.949842 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.949878 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.949910 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.949938 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.949965 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.949991 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950018 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950045 4751 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950072 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950099 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950130 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950156 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950186 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950246 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950272 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950299 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950360 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950391 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950476 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950505 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950574 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950603 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950685 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950713 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950782 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950813 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950913 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950982 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951010 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951114 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951140 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951205 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951376 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951413 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951487 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951519 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951590 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951620 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951697 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951728 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951805 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951875 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951906 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951989 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952073 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952105 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952169 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952200 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952263 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952291 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952409 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952479 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952508 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952573 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952602 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952628 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952693 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952767 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952817 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952884 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952911 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952990 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953063 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953090 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953149 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953173 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953257 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953280 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953366 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953515 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953549 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953623 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953645 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953703 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953804 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953835 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953897 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953923 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953995 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954023 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954079 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954102 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954184 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954207 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954265 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954289 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954310 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954414 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954437 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954458 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954539 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954595 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954619 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954641 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954709 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954738 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954807 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954832 4751 reconstruct.go:97] "Volume reconstruction finished" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954910 4751 reconciler.go:26] "Reconciler: start to sync state" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.957194 4751 manager.go:324] Recovery completed Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.971770 4751 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.973339 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.973871 4751 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.974455 4751 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.974505 4751 kubelet.go:2335] "Starting kubelet main sync loop" Jan 30 21:14:21 crc kubenswrapper[4751]: E0130 21:14:21.974620 4751 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.975633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.975680 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.975700 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.976535 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Jan 30 21:14:21 crc kubenswrapper[4751]: E0130 21:14:21.976650 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.978115 4751 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.978149 4751 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.978179 4751 state_mem.go:36] "Initialized new in-memory state store" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.003065 4751 policy_none.go:49] "None policy: Start" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.004027 4751 memory_manager.go:170] "Starting 
memorymanager" policy="None" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.004071 4751 state_mem.go:35] "Initializing new in-memory state store" Jan 30 21:14:22 crc kubenswrapper[4751]: E0130 21:14:22.013111 4751 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.065932 4751 manager.go:334] "Starting Device Plugin manager" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.066060 4751 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.066086 4751 server.go:79] "Starting device plugin registration server" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.066772 4751 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.066810 4751 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.067250 4751 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.067421 4751 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.067436 4751 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.075221 4751 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.075311 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.076746 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.076803 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.076823 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.077038 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.077442 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.077515 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.078529 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.078574 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.078590 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.078920 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.078971 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.079002 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.079022 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.079234 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.079377 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.081120 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.081170 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.081188 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.081439 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.081599 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.081660 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.081677 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.081740 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.081960 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: E0130 21:14:22.082167 4751 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.083396 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.083433 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.083451 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.083778 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.083944 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.084013 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.084216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.084298 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.084398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.086013 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.086049 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.086072 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.086141 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.086164 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.086175 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.086748 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.086846 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.088265 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.088350 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.088369 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: E0130 21:14:22.113595 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="400ms" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159068 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159360 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159563 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159617 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159647 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159677 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159697 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159745 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159765 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159786 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159828 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159847 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159866 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159884 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159902 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.166988 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.168564 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.168688 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.168775 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.168877 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: E0130 21:14:22.169490 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.39:6443: connect: connection refused" node="crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261239 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261368 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261419 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261465 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261510 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261555 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261584 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261657 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261601 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261606 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261701 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261741 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261601 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261717 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261683 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261813 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261873 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261874 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261939 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262026 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262002 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262091 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262189 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262228 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262249 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262260 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262189 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262193 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262379 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262400 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.370004 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.372109 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.372176 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.372196 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.372266 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: E0130 21:14:22.373042 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.39:6443: connect: connection refused" node="crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.416738 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.426388 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.445548 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
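Both "Attempting to register node" cycles above fail with connection refused because the API server behind api-int.crc.testing:6443 is not serving yet, so the kubelet just keeps retrying. A bare-bones sketch of such a retry loop (raw net/http for brevity; the kubelet actually registers through client-go, and the body and one-second interval here are illustrative):

// register_retry_sketch.go: retry node registration until the API server answers.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

func registerNode(url string, node []byte, interval time.Duration, attempts int) error {
	for i := 0; i < attempts; i++ {
		resp, err := http.Post(url, "application/json", bytes.NewReader(node))
		if err == nil && resp.StatusCode < 300 {
			resp.Body.Close()
			return nil // registered
		}
		if err == nil {
			resp.Body.Close()
		}
		// While the API server is down this prints the same "connection
		// refused" failure the kubelet logs, then waits and tries again.
		fmt.Printf("unable to register node, retrying in %v: %v\n", interval, err)
		time.Sleep(interval)
	}
	return fmt.Errorf("node registration failed after %d attempts", attempts)
}

func main() {
	node := []byte(`{"metadata":{"name":"crc"}}`) // minimal illustrative payload
	_ = registerNode("https://api-int.crc.testing:6443/api/v1/nodes", node, time.Second, 3)
}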
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: W0130 21:14:22.475637 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-36b802131419fb71069a2b542e78ddf0b6cce801cce0852720d517be58c4a524 WatchSource:0}: Error finding container 36b802131419fb71069a2b542e78ddf0b6cce801cce0852720d517be58c4a524: Status 404 returned error can't find the container with id 36b802131419fb71069a2b542e78ddf0b6cce801cce0852720d517be58c4a524 Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.477068 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: W0130 21:14:22.477638 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-c0dac8be882d0e246e6cccfd0c98124e06c22a845c68fe8c125449386f5c3f6c WatchSource:0}: Error finding container c0dac8be882d0e246e6cccfd0c98124e06c22a845c68fe8c125449386f5c3f6c: Status 404 returned error can't find the container with id c0dac8be882d0e246e6cccfd0c98124e06c22a845c68fe8c125449386f5c3f6c Jan 30 21:14:22 crc kubenswrapper[4751]: W0130 21:14:22.490057 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-b1867bd6b8752103bf3861026128570fd9dfa9dd6a1f4d0f448713a4f78fbe5d WatchSource:0}: Error finding container b1867bd6b8752103bf3861026128570fd9dfa9dd6a1f4d0f448713a4f78fbe5d: Status 404 returned error can't find the container with id b1867bd6b8752103bf3861026128570fd9dfa9dd6a1f4d0f448713a4f78fbe5d Jan 30 21:14:22 crc kubenswrapper[4751]: W0130 21:14:22.498404 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-5de1be865ce6d22f44af2d043c618e71fa7e30d9368d721637c5edf29ceb4b15 WatchSource:0}: Error finding container 5de1be865ce6d22f44af2d043c618e71fa7e30d9368d721637c5edf29ceb4b15: Status 404 returned error can't find the container with id 5de1be865ce6d22f44af2d043c618e71fa7e30d9368d721637c5edf29ceb4b15 Jan 30 21:14:22 crc kubenswrapper[4751]: W0130 21:14:22.508241 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-6a9243fea3bf525aecd14125439d384187fe4b04a1c15fa233f887ba4bd6518a WatchSource:0}: Error finding container 6a9243fea3bf525aecd14125439d384187fe4b04a1c15fa233f887ba4bd6518a: Status 404 returned error can't find the container with id 6a9243fea3bf525aecd14125439d384187fe4b04a1c15fa233f887ba4bd6518a Jan 30 21:14:22 crc kubenswrapper[4751]: E0130 21:14:22.515251 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="800ms" Jan 30 21:14:22 crc kubenswrapper[4751]: W0130 21:14:22.759233 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Jan 30 21:14:22 crc kubenswrapper[4751]: E0130 21:14:22.759386 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.773191 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.774555 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.774623 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.774643 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.774684 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:14:22 crc kubenswrapper[4751]: E0130 21:14:22.775141 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.39:6443: connect: connection refused" node="crc" Jan 30 21:14:22 crc kubenswrapper[4751]: W0130 21:14:22.887310 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Jan 30 21:14:22 crc kubenswrapper[4751]: E0130 21:14:22.887544 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:14:22 crc kubenswrapper[4751]: W0130 21:14:22.896176 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Jan 30 21:14:22 crc kubenswrapper[4751]: E0130 21:14:22.896265 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.906976 4751 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.911990 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:19:28.252979258 +0000 UTC Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.980126 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c0dac8be882d0e246e6cccfd0c98124e06c22a845c68fe8c125449386f5c3f6c"} Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.981743 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"36b802131419fb71069a2b542e78ddf0b6cce801cce0852720d517be58c4a524"} Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.983850 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6a9243fea3bf525aecd14125439d384187fe4b04a1c15fa233f887ba4bd6518a"} Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.985868 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5de1be865ce6d22f44af2d043c618e71fa7e30d9368d721637c5edf29ceb4b15"} Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.987049 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b1867bd6b8752103bf3861026128570fd9dfa9dd6a1f4d0f448713a4f78fbe5d"} Jan 30 21:14:23 crc kubenswrapper[4751]: E0130 21:14:23.316900 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="1.6s" Jan 30 21:14:23 crc kubenswrapper[4751]: W0130 21:14:23.386908 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Jan 30 21:14:23 crc kubenswrapper[4751]: E0130 21:14:23.387030 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.575541 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.577577 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.577636 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.577655 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.577692 4751 kubelet_node_status.go:76] 
"Attempting to register node" node="crc" Jan 30 21:14:23 crc kubenswrapper[4751]: E0130 21:14:23.578255 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.39:6443: connect: connection refused" node="crc" Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.902805 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 21:14:23 crc kubenswrapper[4751]: E0130 21:14:23.904262 4751 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.907314 4751 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.912397 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 09:10:12.540771558 +0000 UTC Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.991635 4751 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea" exitCode=0 Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.991756 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea"} Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.991836 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.993452 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.993482 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.993491 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.996030 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db"} Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.996083 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13"} Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.996105 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817"} Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.000898 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac" exitCode=0 Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.000950 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac"} Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.001171 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.002317 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.002354 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.002362 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.003551 4751 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="591aa13b2c2298e81c38fc6e0ddbf8f0c5025d86b7c40ec3c5ee4749ce6804a8" exitCode=0 Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.003629 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"591aa13b2c2298e81c38fc6e0ddbf8f0c5025d86b7c40ec3c5ee4749ce6804a8"} Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.003752 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.004935 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.004981 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.005001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.006357 4751 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="657bffa589cf69814f91728996ae779354f7ad9f62606bbba6fcc4107a06cfb3" exitCode=0 Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.006380 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"657bffa589cf69814f91728996ae779354f7ad9f62606bbba6fcc4107a06cfb3"} Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.006503 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.007764 4751 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.007815 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.007833 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.008737 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.009447 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.009492 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.009509 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.908234 4751 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.912513 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 21:01:00.386132096 +0000 UTC Jan 30 21:14:24 crc kubenswrapper[4751]: E0130 21:14:24.918933 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="3.2s" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.014417 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.014890 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c"} Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.015708 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.015744 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.015761 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.020171 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b"} Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.020235 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9"} Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.020255 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1"} Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.020274 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1"} Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.024869 4751 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3b43a9d38e68aba1f763848cac4817d99a5f5f11f10a3f3da7ae1ec8845e90b8" exitCode=0 Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.024948 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3b43a9d38e68aba1f763848cac4817d99a5f5f11f10a3f3da7ae1ec8845e90b8"} Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.025014 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.025947 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.025990 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.026007 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.026888 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"efe6b37689f97464405ccee9a22eff435e66be2c6103b5187255056bf0febaec"} Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.026974 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.027946 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.027975 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.027985 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.031822 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2c037db6ab27fdac0f9290b2d34f883cc22ac3c79f2b52a16e6579df97474da3"} Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.032051 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 
21:14:25.032489 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e27dab33e7af8b89e8b6a3f6d3beff399121ca17e50406b83ec8a553598834ac"} Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.032610 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"751b154b2de8ba6d171eac7b82c77498ed54b38d4c6759e35dacf49c57e3f945"} Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.033619 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.033769 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.033899 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:25 crc kubenswrapper[4751]: W0130 21:14:25.091578 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Jan 30 21:14:25 crc kubenswrapper[4751]: E0130 21:14:25.091700 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.179312 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.180545 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.180582 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.180594 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.180619 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:14:25 crc kubenswrapper[4751]: E0130 21:14:25.181211 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.39:6443: connect: connection refused" node="crc" Jan 30 21:14:25 crc kubenswrapper[4751]: W0130 21:14:25.382409 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Jan 30 21:14:25 crc kubenswrapper[4751]: E0130 21:14:25.382491 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.912729 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 19:14:46.730516497 +0000 UTC Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.039850 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63"} Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.039938 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.041522 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.041557 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.041570 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.043632 4751 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="915d07a3289fc8f3a7221446ffa0562703611899bec4819f77af631ecbeb26c2" exitCode=0 Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.043741 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.043787 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.044431 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"915d07a3289fc8f3a7221446ffa0562703611899bec4819f77af631ecbeb26c2"} Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.044496 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.044719 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.044955 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.044975 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.045134 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.045311 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.046712 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.046760 4751 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.046757 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.046834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.046861 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.046933 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.046968 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.046985 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.046780 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.913008 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 19:31:35.323365111 +0000 UTC Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.053316 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7187f01f2a4bdab72ec724f553bfce1e954fd9793874021f9c28152b7d33914c"} Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.053409 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f36ab607a38bfd32d8bfe64da36280f9b5efaad895c6c26880a00b9dd38ce5a4"} Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.053434 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.053480 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.053433 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a6ae9a047d02cc4dcd6a27a4561a660059971561db33c72fdaaa10e177e091c8"} Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.054979 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.055059 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.055084 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.802650 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.802884 4751 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.804881 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.804938 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.804958 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.913860 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:10:57.825056845 +0000 UTC Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.062773 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ca7bafdd301335a08edb5982410cee5965742f6b772c88c52ae3630214a4b631"} Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.062833 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cdeab12e361345755bc4e07dae7c7355ad83d93a67d27e35596c4b817e2e7699"} Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.062915 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.064185 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.064254 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.064281 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.177164 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.381986 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.383690 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.383751 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.383771 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.383806 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.914717 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 02:58:03.824856609 +0000 UTC Jan 30 21:14:29 crc kubenswrapper[4751]: I0130 21:14:29.067540 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:29 crc kubenswrapper[4751]: I0130 21:14:29.069903 
4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:29 crc kubenswrapper[4751]: I0130 21:14:29.069966 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:29 crc kubenswrapper[4751]: I0130 21:14:29.069993 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:29 crc kubenswrapper[4751]: I0130 21:14:29.275415 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:29 crc kubenswrapper[4751]: I0130 21:14:29.275614 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:29 crc kubenswrapper[4751]: I0130 21:14:29.276923 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:29 crc kubenswrapper[4751]: I0130 21:14:29.276982 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:29 crc kubenswrapper[4751]: I0130 21:14:29.277004 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:29 crc kubenswrapper[4751]: I0130 21:14:29.915162 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 22:55:08.494797264 +0000 UTC Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.141847 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.142086 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.143788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.143875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.143896 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.450674 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.450918 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.452529 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.452604 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.452621 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.601578 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 
21:14:30.602711 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.602958 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.604369 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.604418 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.604494 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.643140 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.799635 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.807028 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.915461 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 03:00:04.650764547 +0000 UTC Jan 30 21:14:31 crc kubenswrapper[4751]: I0130 21:14:31.073382 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:31 crc kubenswrapper[4751]: I0130 21:14:31.073466 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:31 crc kubenswrapper[4751]: I0130 21:14:31.074953 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:31 crc kubenswrapper[4751]: I0130 21:14:31.075006 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:31 crc kubenswrapper[4751]: I0130 21:14:31.075025 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:31 crc kubenswrapper[4751]: I0130 21:14:31.074954 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:31 crc kubenswrapper[4751]: I0130 21:14:31.075124 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:31 crc kubenswrapper[4751]: I0130 21:14:31.075205 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:31 crc kubenswrapper[4751]: I0130 21:14:31.916023 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:07:53.029691276 +0000 UTC Jan 30 21:14:32 crc kubenswrapper[4751]: I0130 21:14:32.076749 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:32 crc kubenswrapper[4751]: I0130 21:14:32.078143 4751 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:32 crc kubenswrapper[4751]: I0130 21:14:32.078187 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:32 crc kubenswrapper[4751]: I0130 21:14:32.078204 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:32 crc kubenswrapper[4751]: E0130 21:14:32.082503 4751 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 21:14:32 crc kubenswrapper[4751]: I0130 21:14:32.916709 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 21:49:56.708767567 +0000 UTC Jan 30 21:14:33 crc kubenswrapper[4751]: I0130 21:14:33.337975 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:33 crc kubenswrapper[4751]: I0130 21:14:33.338217 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:33 crc kubenswrapper[4751]: I0130 21:14:33.340070 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:33 crc kubenswrapper[4751]: I0130 21:14:33.340120 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:33 crc kubenswrapper[4751]: I0130 21:14:33.340139 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:33 crc kubenswrapper[4751]: I0130 21:14:33.344285 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:33 crc kubenswrapper[4751]: I0130 21:14:33.917175 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:37:53.106884682 +0000 UTC Jan 30 21:14:34 crc kubenswrapper[4751]: I0130 21:14:34.082396 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:34 crc kubenswrapper[4751]: I0130 21:14:34.083473 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:34 crc kubenswrapper[4751]: I0130 21:14:34.083536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:34 crc kubenswrapper[4751]: I0130 21:14:34.083554 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:34 crc kubenswrapper[4751]: I0130 21:14:34.918374 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 22:36:37.397229511 +0000 UTC Jan 30 21:14:35 crc kubenswrapper[4751]: I0130 21:14:35.259543 4751 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 30 21:14:35 crc kubenswrapper[4751]: I0130 21:14:35.259631 4751 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 30 21:14:35 crc kubenswrapper[4751]: W0130 21:14:35.802771 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 30 21:14:35 crc kubenswrapper[4751]: I0130 21:14:35.802873 4751 trace.go:236] Trace[414018107]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 21:14:25.800) (total time: 10001ms): Jan 30 21:14:35 crc kubenswrapper[4751]: Trace[414018107]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:14:35.802) Jan 30 21:14:35 crc kubenswrapper[4751]: Trace[414018107]: [10.001898977s] [10.001898977s] END Jan 30 21:14:35 crc kubenswrapper[4751]: E0130 21:14:35.802899 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 30 21:14:35 crc kubenswrapper[4751]: I0130 21:14:35.908360 4751 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 30 21:14:35 crc kubenswrapper[4751]: I0130 21:14:35.919588 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:13:32.257966145 +0000 UTC Jan 30 21:14:36 crc kubenswrapper[4751]: I0130 21:14:36.338569 4751 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 21:14:36 crc kubenswrapper[4751]: I0130 21:14:36.338625 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 21:14:36 crc kubenswrapper[4751]: I0130 21:14:36.505109 4751 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 21:14:36 crc kubenswrapper[4751]: I0130 21:14:36.505178 4751 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 21:14:36 crc kubenswrapper[4751]: I0130 21:14:36.509403 4751 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 21:14:36 crc kubenswrapper[4751]: I0130 21:14:36.509457 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 21:14:36 crc kubenswrapper[4751]: I0130 21:14:36.919972 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 13:29:08.862525866 +0000 UTC Jan 30 21:14:37 crc kubenswrapper[4751]: I0130 21:14:37.921094 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 21:19:21.360248585 +0000 UTC Jan 30 21:14:37 crc kubenswrapper[4751]: I0130 21:14:37.990415 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 30 21:14:37 crc kubenswrapper[4751]: I0130 21:14:37.990753 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:37 crc kubenswrapper[4751]: I0130 21:14:37.992208 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:37 crc kubenswrapper[4751]: I0130 21:14:37.992262 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:37 crc kubenswrapper[4751]: I0130 21:14:37.992274 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:38 crc kubenswrapper[4751]: I0130 21:14:38.029561 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 30 21:14:38 crc kubenswrapper[4751]: I0130 21:14:38.092762 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:38 crc kubenswrapper[4751]: I0130 21:14:38.093917 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:38 crc kubenswrapper[4751]: I0130 21:14:38.093972 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:38 crc kubenswrapper[4751]: I0130 21:14:38.093990 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:38 crc kubenswrapper[4751]: I0130 21:14:38.106592 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 30 21:14:38 crc kubenswrapper[4751]: I0130 21:14:38.921450 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 
21:23:38.659999399 +0000 UTC Jan 30 21:14:39 crc kubenswrapper[4751]: I0130 21:14:39.096239 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:39 crc kubenswrapper[4751]: I0130 21:14:39.098093 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:39 crc kubenswrapper[4751]: I0130 21:14:39.098160 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:39 crc kubenswrapper[4751]: I0130 21:14:39.098173 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:39 crc kubenswrapper[4751]: I0130 21:14:39.922620 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 18:02:39.977112097 +0000 UTC Jan 30 21:14:40 crc kubenswrapper[4751]: I0130 21:14:40.147432 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:40 crc kubenswrapper[4751]: I0130 21:14:40.147616 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:40 crc kubenswrapper[4751]: I0130 21:14:40.148920 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:40 crc kubenswrapper[4751]: I0130 21:14:40.148990 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:40 crc kubenswrapper[4751]: I0130 21:14:40.149018 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:40 crc kubenswrapper[4751]: I0130 21:14:40.151847 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:40 crc kubenswrapper[4751]: I0130 21:14:40.923193 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 15:06:49.227051537 +0000 UTC Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.100570 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.100640 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.101986 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.102071 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.102097 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:41 crc kubenswrapper[4751]: E0130 21:14:41.485572 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.488099 4751 trace.go:236] Trace[1353107066]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 
Jan 30 21:14:41 crc kubenswrapper[4751]: Trace[1353107066]: ---"Objects listed" error: 14942ms (21:14:41.487)
Jan 30 21:14:41 crc kubenswrapper[4751]: Trace[1353107066]: [14.942738788s] [14.942738788s] END
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.488205 4751 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.489835 4751 trace.go:236] Trace[1410619491]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 21:14:31.475) (total time: 10014ms):
Jan 30 21:14:41 crc kubenswrapper[4751]: Trace[1410619491]: ---"Objects listed" error: 10014ms (21:14:41.489)
Jan 30 21:14:41 crc kubenswrapper[4751]: Trace[1410619491]: [10.014455076s] [10.014455076s] END
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.489896 4751 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.490981 4751 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.491046 4751 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 30 21:14:41 crc kubenswrapper[4751]: E0130 21:14:41.492625 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.506203 4751 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.526235 4751 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36486->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.526302 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36486->192.168.126.11:17697: read: connection reset by peer"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.526721 4751 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.526764 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.848408 4751 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.897391 4751 apiserver.go:52] "Watching apiserver"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.902702 4751 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.902945 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"]
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.903435 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.903472 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.903727 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 21:14:41 crc kubenswrapper[4751]: E0130 21:14:41.903494 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.903814 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.903858 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:14:41 crc kubenswrapper[4751]: E0130 21:14:41.903944 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.903588 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:14:41 crc kubenswrapper[4751]: E0130 21:14:41.904155 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.906662 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.906789 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.906827 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.906546 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.907161 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.907248 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.907448 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.909317 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.911438 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.913486 4751 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.923697 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 04:35:22.664207397 +0000 UTC Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.951241 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.961284 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.974782 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.989597 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994605 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994658 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994682 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994702 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994724 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994744 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994762 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994782 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994804 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994822 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994841 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994860 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994880 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994901 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994937 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994966 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995005 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995025 4751 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995044 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995066 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995082 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995099 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995116 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995134 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995153 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995176 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995196 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995219 4751 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995239 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995255 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995273 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995294 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995312 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995348 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995367 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995388 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995407 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 
21:14:41.995423 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995439 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995464 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995485 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995504 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995523 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995542 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995565 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995581 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995607 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 
21:14:41.995630 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995647 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995665 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995684 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995703 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995728 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995746 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995765 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995785 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995803 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995825 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995845 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995869 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995889 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996059 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996082 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996124 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996143 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996161 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996181 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996200 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996220 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996248 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996267 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996288 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996306 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996340 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996361 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996384 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996403 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996421 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996435 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996453 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996467 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996482 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996497 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996513 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996529 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996545 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996561 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996576 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996591 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996609 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996625 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996656 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996672 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996687 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996702 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996719 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996734 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996750 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996765 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.996780 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.996796 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.996812 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997020 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997173 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997234 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997258 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997311 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997456 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997757 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997750 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997804 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997785 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997895 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:41.998059 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:14:42.498036114 +0000 UTC m=+21.243858873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.998180 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.998184 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.998483 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.998572 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.998615 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999014 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999048 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999074 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999098 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999122 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999145 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999154 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999168 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999205 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999296 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999383 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999437 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.998807 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999713 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999768 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999815 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999862 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999908 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999956 4751 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999996 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000033 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000072 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000112 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000149 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000197 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000236 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000276 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000370 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 
21:14:42.000419 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000453 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000486 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000524 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000562 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000598 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000638 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000674 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000708 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000744 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000782 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000820 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000862 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000915 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000954 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000992 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001027 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001060 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001156 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001193 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001227 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001262 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001301 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001361 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001396 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001430 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001474 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001545 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001582 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001619 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001654 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001695 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001728 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001768 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001804 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001814 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001841 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001846 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001887 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001998 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002050 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002092 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002121 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002153 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002180 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002206 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002232 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002262 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002289 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002314 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002360 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002386 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002412 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002436 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002460 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002446 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002485 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002534 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002597 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002624 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002649 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002672 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002698 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002723 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002793 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002818 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002843 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002866 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002868 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002891 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002918 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002941 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002998 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003035 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003060 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") 
" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003084 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003108 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003135 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003162 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003196 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003228 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003253 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003302 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003628 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003646 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003676 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003716 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003804 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003822 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003835 4751 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003853 4751 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003867 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003880 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003895 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003906 4751 reconciler_common.go:293] "Volume detached 
for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003919 4751 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003932 4751 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003947 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003961 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003973 4751 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003986 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003947 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.004161 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.004196 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.004479 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.004690 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.004702 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
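The status_manager failure above is a startup ordering hazard, not data loss: the patch for network-operator-58b4c7f79c-55gtf is rejected because the API server must first consult the pod.network-node-identity.openshift.io webhook on 127.0.0.1:9743, and the pod that serves it (network-node-identity-vrzqb, whose volumes are only now being mounted in the surrounding entries) is not up yet, hence "connection refused"; the kubelet simply retries the status update later. Unescaped, the payload is a strategic-merge patch whose skeleton looks like the sketch below: "$setElementOrder/conditions" pins the order of the merged list, and "conditions" entries merge by their "type" key (container detail from the original omitted here):

    package main

    import (
        "encoding/json"
        "os"
    )

    func main() {
        // Skeleton of the escaped patch in the log entry above.
        patch := map[string]any{
            "metadata": map[string]any{"uid": "37a5e44f-9a88-4405-be8a-b645485e7312"},
            "status": map[string]any{
                "$setElementOrder/conditions": []map[string]string{
                    {"type": "PodReadyToStartContainers"}, {"type": "Initialized"},
                    {"type": "Ready"}, {"type": "ContainersReady"}, {"type": "PodScheduled"},
                },
                "conditions": []map[string]any{{
                    "type":   "ContainersReady",
                    "status": "False",
                    "reason": "ContainersNotReady",
                }},
            },
        }
        enc := json.NewEncoder(os.Stdout)
        enc.SetIndent("", "  ")
        _ = enc.Encode(patch)
    }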
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.005004 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.006405 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.004679 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.005388 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.005300 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.005703 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.005739 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.006475 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003997 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.005868 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.005942 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.005982 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.006638 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.006703 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:42.506682026 +0000 UTC m=+21.252504675 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.006863 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.006990 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.007105 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.007172 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.007176 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.007375 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.007613 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.007755 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.008035 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.008397 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.008444 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.008541 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.008586 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.008940 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.008948 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.009019 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.009834 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.010005 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.010217 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.010231 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.010430 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.010545 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.011087 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.011254 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.011487 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.011756 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.012004 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.012049 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.012102 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.012396 4751 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.012520 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.013046 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.013316 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.013899 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.013753 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.015682 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.016423 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:42.516400223 +0000 UTC m=+21.262223072 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.006524 4751 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.016823 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.016847 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.016865 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.016882 4751 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.016899 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.016915 4751 
reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.016930 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.028571 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.028609 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.028627 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.028698 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:42.52867173 +0000 UTC m=+21.274494379 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.031039 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.031511 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.033003 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.033335 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.033819 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.033868 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.033945 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.033960 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.033807 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.035216 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.035704 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.035825 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.036185 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.036467 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.036488 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.036501 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.036549 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:42.536534075 +0000 UTC m=+21.282356724 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.036982 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.037303 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.037695 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.037838 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.037855 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.040464 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.040524 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.040810 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.040820 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.040857 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.041000 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.041026 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.041082 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.041110 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.041416 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.041463 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.041597 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.041782 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.042360 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.041756 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.041975 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.042143 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.042607 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.043043 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.043636 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.044046 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.045071 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.046253 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.046531 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.046713 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.047467 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.047650 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.048631 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.048701 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.049050 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.049304 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.049059 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.049588 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.049727 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.050014 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.050040 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.049990 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.050085 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.050533 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.050650 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.050684 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.050788 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.051134 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.051691 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.051836 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.051986 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.051997 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.051989 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052262 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052319 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052583 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052654 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052681 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052709 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052719 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052739 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052769 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052630 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052824 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052858 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.053197 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.053398 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.053507 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.053502 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.053621 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.053782 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.053909 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054238 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054234 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054263 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054459 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054539 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054543 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054669 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054701 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054724 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054688 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054812 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.055814 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.055921 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.055949 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.056087 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.056667 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.056684 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.056736 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.056851 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.056898 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.056913 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.057087 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.057135 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.057187 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.057440 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.059124 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.061150 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.061258 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.061654 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.061767 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.062156 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.062451 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.062535 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.062588 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.062616 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.062729 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.062898 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.063079 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.063791 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.063936 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.065886 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.068640 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.074858 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.075410 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.075582 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.085986 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.090870 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.090961 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.096807 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.103701 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.106354 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.106915 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63" exitCode=255 Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.106974 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63"} Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.117405 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.117893 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.118044 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.119377 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.119524 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120512 4751 scope.go:117] "RemoveContainer" containerID="0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120591 4751 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120609 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120714 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120736 4751 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120746 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120756 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120769 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120781 4751 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120794 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120806 4751 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120818 4751 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120848 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120863 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120878 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.121703 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.121944 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath 
\"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.121973 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.121994 4751 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122012 4751 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122030 4751 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122048 4751 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122066 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122086 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122104 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122398 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122423 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122442 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122461 4751 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122478 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122496 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122515 4751 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122531 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122565 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122584 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122603 4751 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122619 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122637 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122653 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122670 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122687 4751 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122703 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122720 4751 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122738 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122757 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122777 4751 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122794 4751 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122811 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122828 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122847 4751 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122864 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122881 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122898 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122916 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122933 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122950 4751 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122967 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122985 4751 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123004 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123022 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123042 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123058 4751 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123076 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123095 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123111 4751 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123135 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123153 4751 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123171 4751 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123189 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" 
(UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123206 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123224 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123244 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123263 4751 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123281 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123300 4751 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123318 4751 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123366 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123385 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123403 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123420 4751 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123438 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123456 4751 reconciler_common.go:293] "Volume detached for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123473 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123492 4751 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123511 4751 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123529 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123548 4751 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123565 4751 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123583 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123600 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123619 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123636 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123654 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123671 4751 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123687 4751 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123719 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123735 4751 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123752 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123768 4751 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123787 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123805 4751 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123822 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123839 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123855 4751 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123871 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123887 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123903 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123919 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123937 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123954 4751 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123975 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123991 4751 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124009 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124025 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124041 4751 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124061 4751 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124081 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124100 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124117 4751 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124134 4751 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124150 4751 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124170 4751 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124186 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124204 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124221 4751 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124238 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124254 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124271 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124318 4751 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124356 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124374 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124390 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124406 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124422 4751 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124439 4751 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124455 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124473 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124491 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124508 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124524 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124543 4751 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124562 4751 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124580 4751 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124596 4751 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124612 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124627 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124643 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124659 4751 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124675 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124692 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124708 4751 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124724 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124740 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124756 4751 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124772 4751 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124787 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124802 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124818 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124835 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124851 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 30 
21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124867 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124882 4751 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124899 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124916 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124932 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124948 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124964 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124980 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124996 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.125014 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.125031 4751 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.125047 4751 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.125065 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc 
kubenswrapper[4751]: I0130 21:14:42.129899 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.141783 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.152098 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.163519 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.174756 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.183363 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.229917 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.242850 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:14:42 crc kubenswrapper[4751]: W0130 21:14:42.243814 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-9e93b94ff985cacc5b3b4759872fec230b8cb0d3eae55347a753e51e8e6dc32a WatchSource:0}: Error finding container 9e93b94ff985cacc5b3b4759872fec230b8cb0d3eae55347a753e51e8e6dc32a: Status 404 returned error can't find the container with id 9e93b94ff985cacc5b3b4759872fec230b8cb0d3eae55347a753e51e8e6dc32a Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.250707 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:14:42 crc kubenswrapper[4751]: W0130 21:14:42.267906 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-e17ef3b78fbfdf1726d9df156b0b0b217b2d101e105a28372e308ba3338ce1d8 WatchSource:0}: Error finding container e17ef3b78fbfdf1726d9df156b0b0b217b2d101e105a28372e308ba3338ce1d8: Status 404 returned error can't find the container with id e17ef3b78fbfdf1726d9df156b0b0b217b2d101e105a28372e308ba3338ce1d8 Jan 30 21:14:42 crc kubenswrapper[4751]: W0130 21:14:42.268388 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-86be573b52ab01f88c21acd0d53bbf28b3b51e60cad206573fb9d749616b4593 WatchSource:0}: Error finding container 86be573b52ab01f88c21acd0d53bbf28b3b51e60cad206573fb9d749616b4593: Status 404 returned error can't find the container with id 86be573b52ab01f88c21acd0d53bbf28b3b51e60cad206573fb9d749616b4593 Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.528932 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.529190 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:14:43.529155711 +0000 UTC m=+22.274978390 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.529421 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.529458 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.529492 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.529571 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.529625 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:43.529609362 +0000 UTC m=+22.275432031 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.530027 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.530079 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:43.530065693 +0000 UTC m=+22.275888362 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.530258 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.530293 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.530312 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.530419 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:43.530402011 +0000 UTC m=+22.276224690 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.630835 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.631020 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.631322 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.631453 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.631604 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" 
failed. No retries permitted until 2026-01-30 21:14:43.631584988 +0000 UTC m=+22.377407657 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.923934 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 14:40:13.769444623 +0000 UTC Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.111456 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152"} Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.111527 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"86be573b52ab01f88c21acd0d53bbf28b3b51e60cad206573fb9d749616b4593"} Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.113426 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7"} Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.113537 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22"} Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.113551 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e17ef3b78fbfdf1726d9df156b0b0b217b2d101e105a28372e308ba3338ce1d8"} Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.114880 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9e93b94ff985cacc5b3b4759872fec230b8cb0d3eae55347a753e51e8e6dc32a"} Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.116974 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.118826 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d"} Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.119078 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:43 crc kubenswrapper[4751]: 
I0130 21:14:43.127049 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.140846 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.153986 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30
T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.173689 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.186681 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.208500 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.223909 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.236793 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.253507 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.272111 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.291791 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.308847 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.330071 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.346564 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.351445 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.353577 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.371290 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.373390 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.386790 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.403500 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.421074 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.438350 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.455504 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.476162 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.500563 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.518575 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.537310 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.539653 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.539753 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.539803 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.539855 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.539919 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:14:45.539884192 +0000 UTC m=+24.285706881 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.539989 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.540036 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.540090 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:45.540065796 +0000 UTC m=+24.285888475 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.540143 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.540859 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.540890 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.541474 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:45.541453089 +0000 UTC m=+24.287275768 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.541564 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:45.541550671 +0000 UTC m=+24.287373360 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.555047 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287fa
af92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.580532 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.607455 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.622360 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.635896 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.641226 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.641456 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.641499 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.641519 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.641583 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:45.641565172 +0000 UTC m=+24.387387831 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.924173 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:17:37.907727507 +0000 UTC Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.976478 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.976573 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.976628 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.976708 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.976872 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.976936 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.982168 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.982842 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.983855 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.984437 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.985379 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.985829 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.986365 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.987194 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.987815 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.988679 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.989126 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.990113 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.990615 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.991076 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.991905 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.992388 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.993226 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.993584 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.994096 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.995302 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.995796 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.996804 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.997229 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.998248 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.998665 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.999250 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" 
path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.000287 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.000788 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.001695 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.002188 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.003009 4751 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.003110 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.004687 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.005630 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.006058 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.007480 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.008070 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.008949 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.009594 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.010592 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.011063 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.011987 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.012622 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.013532 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.013975 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.014813 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.015315 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.016412 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.016904 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.017685 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.018119 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.018949 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.019821 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.020267 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" 
path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.924382 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 22:17:04.707360338 +0000 UTC Jan 30 21:14:45 crc kubenswrapper[4751]: I0130 21:14:45.558447 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:45 crc kubenswrapper[4751]: I0130 21:14:45.558572 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:45 crc kubenswrapper[4751]: I0130 21:14:45.558648 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.558712 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:14:49.558680592 +0000 UTC m=+28.304503271 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.558774 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:45 crc kubenswrapper[4751]: I0130 21:14:45.558831 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.558852 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:49.558829855 +0000 UTC m=+28.304652544 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.558863 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.558939 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.558968 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.558991 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.559051 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:49.55902895 +0000 UTC m=+28.304851639 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.559098 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:49.55907041 +0000 UTC m=+28.304893089 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:45 crc kubenswrapper[4751]: I0130 21:14:45.659138 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.659357 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.659378 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.659391 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.659451 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:49.659436199 +0000 UTC m=+28.405258858 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:45 crc kubenswrapper[4751]: I0130 21:14:45.925251 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 12:09:17.260773566 +0000 UTC Jan 30 21:14:45 crc kubenswrapper[4751]: I0130 21:14:45.975099 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:45 crc kubenswrapper[4751]: I0130 21:14:45.975112 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:45 crc kubenswrapper[4751]: I0130 21:14:45.975238 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.975389 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.975489 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.975576 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:14:46 crc kubenswrapper[4751]: I0130 21:14:46.128287 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41"} Jan 30 21:14:46 crc kubenswrapper[4751]: I0130 21:14:46.158260 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:46 crc kubenswrapper[4751]: I0130 21:14:46.177483 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:46 crc kubenswrapper[4751]: I0130 21:14:46.197545 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:46 crc kubenswrapper[4751]: I0130 21:14:46.216849 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:46 crc kubenswrapper[4751]: I0130 21:14:46.238026 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:46 crc kubenswrapper[4751]: I0130 21:14:46.259513 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:46 crc kubenswrapper[4751]: I0130 21:14:46.280016 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:46 crc kubenswrapper[4751]: I0130 21:14:46.299590 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:46 crc kubenswrapper[4751]: I0130 21:14:46.925785 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:54:06.420919355 +0000 UTC Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.892925 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.894735 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.894797 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.894813 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.895263 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.903554 4751 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.903791 4751 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.904942 4751 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.904985 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.905000 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.905018 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.905041 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:47Z","lastTransitionTime":"2026-01-30T21:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.926259 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:50:11.511144631 +0000 UTC Jan 30 21:14:47 crc kubenswrapper[4751]: E0130 21:14:47.935435 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.943739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.943811 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.943875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.943970 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.943990 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:47Z","lastTransitionTime":"2026-01-30T21:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:47 crc kubenswrapper[4751]: E0130 21:14:47.960638 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.966460 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.966728 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.966749 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.966774 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.966795 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:47Z","lastTransitionTime":"2026-01-30T21:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.975344 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.975376 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.975448 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:47 crc kubenswrapper[4751]: E0130 21:14:47.975493 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:14:47 crc kubenswrapper[4751]: E0130 21:14:47.975658 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:14:47 crc kubenswrapper[4751]: E0130 21:14:47.975822 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:14:47 crc kubenswrapper[4751]: E0130 21:14:47.987482 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.992156 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.992208 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.992225 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.992249 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.992264 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:47Z","lastTransitionTime":"2026-01-30T21:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: E0130 21:14:48.007355 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.012366 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.012425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.012440 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.012463 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.012482 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: E0130 21:14:48.030697 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: E0130 21:14:48.030859 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.033004 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.033080 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.033099 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.033125 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.033142 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.135270 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.135347 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.135362 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.135380 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.135392 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.151787 4751 csr.go:261] certificate signing request csr-9mtls is approved, waiting to be issued Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.170686 4751 csr.go:257] certificate signing request csr-9mtls is issued Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.237470 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.237507 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.237515 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.237531 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.237542 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.340281 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.340347 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.340359 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.340375 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.340385 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.443458 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.443498 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.443512 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.443530 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.443542 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.546235 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.546301 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.546312 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.546345 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.546357 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.625515 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-xdclq"] Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.625865 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-xdclq" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.627660 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.628211 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.628891 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.643721 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed0
8287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.648389 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.648421 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.648469 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.648486 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.648494 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.656473 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.668711 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.684534 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3e4f7eaf-acd6-4cf5-874c-d88c4e479113-hosts-file\") pod \"node-resolver-xdclq\" (UID: \"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\") " pod="openshift-dns/node-resolver-xdclq" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.684618 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qcg6\" (UniqueName: \"kubernetes.io/projected/3e4f7eaf-acd6-4cf5-874c-d88c4e479113-kube-api-access-6qcg6\") pod \"node-resolver-xdclq\" (UID: \"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\") " pod="openshift-dns/node-resolver-xdclq" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.687509 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.698991 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.716492 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.731627 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.743233 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.751401 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.751440 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.751450 4751 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.751464 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.751474 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.759694 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.785742 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3e4f7eaf-acd6-4cf5-874c-d88c4e479113-hosts-file\") pod \"node-resolver-xdclq\" (UID: \"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\") " pod="openshift-dns/node-resolver-xdclq" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.785823 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qcg6\" (UniqueName: 
\"kubernetes.io/projected/3e4f7eaf-acd6-4cf5-874c-d88c4e479113-kube-api-access-6qcg6\") pod \"node-resolver-xdclq\" (UID: \"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\") " pod="openshift-dns/node-resolver-xdclq" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.785901 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3e4f7eaf-acd6-4cf5-874c-d88c4e479113-hosts-file\") pod \"node-resolver-xdclq\" (UID: \"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\") " pod="openshift-dns/node-resolver-xdclq" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.813870 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qcg6\" (UniqueName: \"kubernetes.io/projected/3e4f7eaf-acd6-4cf5-874c-d88c4e479113-kube-api-access-6qcg6\") pod \"node-resolver-xdclq\" (UID: \"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\") " pod="openshift-dns/node-resolver-xdclq" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.853690 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.853727 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.853739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.853754 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.853764 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.927311 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 03:09:17.0206572 +0000 UTC Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.939528 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-xdclq" Jan 30 21:14:48 crc kubenswrapper[4751]: W0130 21:14:48.954869 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e4f7eaf_acd6_4cf5_874c_d88c4e479113.slice/crio-eb24b3f63c5e086ccba8796160bec62efdd0f3206635c7a349c953c84d64d467 WatchSource:0}: Error finding container eb24b3f63c5e086ccba8796160bec62efdd0f3206635c7a349c953c84d64d467: Status 404 returned error can't find the container with id eb24b3f63c5e086ccba8796160bec62efdd0f3206635c7a349c953c84d64d467 Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.955755 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.955810 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.955827 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.955849 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.955864 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.000201 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n8bjd"] Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.001653 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-5sgk2"] Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.001843 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.002055 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-xxc7s"] Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.002396 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.003169 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-vgfkp"] Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.003488 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.003777 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.003828 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.004285 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.004987 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.005024 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.004991 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.005546 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006063 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006132 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006143 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006223 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006077 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006398 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006446 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006598 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006649 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006712 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006764 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.007508 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.008380 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.027674 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.059678 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.060754 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.060784 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.060793 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.060808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.060818 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:49Z","lastTransitionTime":"2026-01-30T21:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.071905 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087410 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-script-lib\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087457 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-socket-dir-parent\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 
21:14:49.087478 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-conf-dir\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087500 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g67rv\" (UniqueName: \"kubernetes.io/projected/ee35b719-afe2-45cf-8726-00c19502f02f-kube-api-access-g67rv\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087526 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9acdd0f1-560b-4246-b045-c598c5bbb93d-mcd-auth-proxy-config\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087590 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-system-cni-dir\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087627 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-openvswitch\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087647 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-run-k8s-cni-cncf-io\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087663 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-var-lib-cni-bin\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087678 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-var-lib-kubelet\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087694 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-bin\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087707 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-netd\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087729 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ee35b719-afe2-45cf-8726-00c19502f02f-cni-binary-copy\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087750 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-cni-dir\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087764 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-var-lib-cni-multus\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087777 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087793 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-etc-openvswitch\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087809 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-hostroot\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087826 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-system-cni-dir\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087844 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-cnibin\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087858 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-node-log\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087876 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9acdd0f1-560b-4246-b045-c598c5bbb93d-proxy-tls\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087890 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tfrj\" (UniqueName: \"kubernetes.io/projected/9acdd0f1-560b-4246-b045-c598c5bbb93d-kube-api-access-8tfrj\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087903 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-slash\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087917 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-ovn\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087954 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-etc-kubernetes\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087968 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ee35b719-afe2-45cf-8726-00c19502f02f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087984 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-cni-binary-copy\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087998 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-env-overrides\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088012 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovn-node-metrics-cert\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088036 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9acdd0f1-560b-4246-b045-c598c5bbb93d-rootfs\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088050 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-kubelet\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088065 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-log-socket\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088079 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-daemon-config\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088100 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qx87\" (UniqueName: \"kubernetes.io/projected/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-kube-api-access-2qx87\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088115 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-run-netns\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088127 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-os-release\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 
30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088142 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-config\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088157 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-run-multus-certs\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088169 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-systemd-units\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088183 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-systemd\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088195 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-var-lib-openvswitch\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088209 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-netns\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088222 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-ovn-kubernetes\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088237 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8497\" (UniqueName: \"kubernetes.io/projected/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-kube-api-access-s8497\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088259 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-cnibin\") pod \"multus-5sgk2\" (UID: 
\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088273 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-os-release\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088288 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.094619 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.106323 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.116113 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.130675 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.135588 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xdclq" event={"ID":"3e4f7eaf-acd6-4cf5-874c-d88c4e479113","Type":"ContainerStarted","Data":"eb24b3f63c5e086ccba8796160bec62efdd0f3206635c7a349c953c84d64d467"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.154606 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.163562 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.163596 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.163605 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.163619 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.163629 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:49Z","lastTransitionTime":"2026-01-30T21:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.172173 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-30 21:09:48 +0000 UTC, rotation deadline is 2026-11-22 14:03:10.552479534 +0000 UTC Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.172253 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7096h48m21.380230385s for next certificate rotation
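Every "Failed to update status for pod" entry in this log fails the same way: the kubelet's status PATCH is intercepted by the pod.network-node-identity.openshift.io admission webhook on 127.0.0.1:9743, whose serving certificate expired 2025-08-24 while the node clock reads 2026-01-30. The sketch below is not kubelet code; it is a minimal, self-contained Go illustration, with timestamps copied from the entries above, of the validity-window comparison crypto/x509 makes during verification, plus a check that the certificate_manager's quoted rotation wait is just the distance from the current time to the logged rotation deadline:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps taken verbatim from the journal entries above.
        now := time.Date(2026, time.January, 30, 21, 14, 49, 0, time.UTC)
        notAfter := time.Date(2025, time.August, 24, 17, 21, 41, 0, time.UTC)

        // crypto/x509 rejects any chain whose leaf lies outside its
        // [NotBefore, NotAfter] window; on failure it formats the same
        // "current time ... is after ..." detail seen in every webhook error here.
        if now.After(notAfter) {
            fmt.Printf("x509: certificate has expired or is not yet valid: current time %s is after %s\n",
                now.Format(time.RFC3339), notAfter.Format(time.RFC3339))
        }

        // The rotation wait logged by certificate_manager.go is simply
        // deadline minus now: 2026-11-22 14:03:10 UTC - 2026-01-30 21:14:49 UTC.
        deadline := time.Date(2026, time.November, 22, 14, 3, 10, 0, time.UTC)
        fmt.Println(deadline.Sub(now)) // 7096h48m21s, matching "Waiting 7096h48m21.380230385s"
    }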
Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.177346 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.188881 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8497\" (UniqueName: \"kubernetes.io/projected/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-kube-api-access-s8497\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.188934 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-cnibin\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.188955 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-os-release\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.188982 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189002 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-script-lib\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189024 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-socket-dir-parent\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189044 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-conf-dir\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189063 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g67rv\" (UniqueName: \"kubernetes.io/projected/ee35b719-afe2-45cf-8726-00c19502f02f-kube-api-access-g67rv\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189086 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9acdd0f1-560b-4246-b045-c598c5bbb93d-mcd-auth-proxy-config\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189106 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-system-cni-dir\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189126 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-openvswitch\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189122 4751 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189186 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-os-release\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189226 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-conf-dir\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189203 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-openvswitch\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189151 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-run-k8s-cni-cncf-io\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189223 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-run-k8s-cni-cncf-io\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189268 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-var-lib-cni-bin\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189287 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-var-lib-kubelet\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189301 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-bin\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189316 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-netd\") pod 
\"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189351 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ee35b719-afe2-45cf-8726-00c19502f02f-cni-binary-copy\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189300 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-socket-dir-parent\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189368 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-cni-dir\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189373 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-var-lib-kubelet\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189388 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-var-lib-cni-multus\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189411 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-var-lib-cni-multus\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189360 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-system-cni-dir\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189434 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189463 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-etc-openvswitch\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" 
Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189469 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-bin\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189435 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-netd\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189487 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-hostroot\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189489 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-var-lib-cni-bin\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189509 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-system-cni-dir\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189534 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-system-cni-dir\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189560 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-cnibin\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189572 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-etc-openvswitch\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189580 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-node-log\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189600 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-cni-dir\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189625 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-cnibin\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189631 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-node-log\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189603 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-hostroot\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189602 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9acdd0f1-560b-4246-b045-c598c5bbb93d-proxy-tls\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189675 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tfrj\" (UniqueName: \"kubernetes.io/projected/9acdd0f1-560b-4246-b045-c598c5bbb93d-kube-api-access-8tfrj\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189696 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-slash\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189716 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-ovn\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189758 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-etc-kubernetes\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189779 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ee35b719-afe2-45cf-8726-00c19502f02f-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189804 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-cni-binary-copy\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189826 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-env-overrides\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189832 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-ovn\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189848 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovn-node-metrics-cert\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189861 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-slash\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189883 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9acdd0f1-560b-4246-b045-c598c5bbb93d-rootfs\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189913 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-kubelet\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189971 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-log-socket\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189998 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-daemon-config\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 
30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190014 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-etc-kubernetes\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190023 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qx87\" (UniqueName: \"kubernetes.io/projected/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-kube-api-access-2qx87\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190098 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-run-netns\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190128 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190131 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-os-release\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190155 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9acdd0f1-560b-4246-b045-c598c5bbb93d-mcd-auth-proxy-config\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190178 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-os-release\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190185 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-config\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190198 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ee35b719-afe2-45cf-8726-00c19502f02f-cni-binary-copy\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 
21:14:49.190215 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-run-multus-certs\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190216 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9acdd0f1-560b-4246-b045-c598c5bbb93d-rootfs\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190216 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-run-netns\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190244 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-run-multus-certs\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190247 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-kubelet\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190198 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-script-lib\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190269 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-systemd-units\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190337 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-log-socket\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190497 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ee35b719-afe2-45cf-8726-00c19502f02f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190533 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-systemd-units\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190580 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-systemd\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190643 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-var-lib-openvswitch\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190693 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-systemd\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190726 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-netns\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190803 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-ovn-kubernetes\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190870 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-ovn-kubernetes\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190882 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-config\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190900 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-cni-binary-copy\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190904 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-env-overrides\") pod \"ovnkube-node-n8bjd\" (UID: 
\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190919 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-var-lib-openvswitch\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190928 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-netns\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190943 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-cnibin\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.191410 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-daemon-config\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.195774 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovn-node-metrics-cert\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.196481 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9acdd0f1-560b-4246-b045-c598c5bbb93d-proxy-tls\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.199603 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.212383 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g67rv\" (UniqueName: \"kubernetes.io/projected/ee35b719-afe2-45cf-8726-00c19502f02f-kube-api-access-g67rv\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.216462 
4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qx87\" (UniqueName: \"kubernetes.io/projected/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-kube-api-access-2qx87\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.217147 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tfrj\" (UniqueName: \"kubernetes.io/projected/9acdd0f1-560b-4246-b045-c598c5bbb93d-kube-api-access-8tfrj\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.220412 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.222305 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8497\" (UniqueName: \"kubernetes.io/projected/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-kube-api-access-s8497\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.243775 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.266298 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.266350 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.266361 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.266377 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.266388 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:49Z","lastTransitionTime":"2026-01-30T21:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.267994 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.297635 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.325240 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.326634 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.335491 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: W0130 21:14:49.336256 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcecdc4b_6607_4e4e_a9b5_49b85c030d21.slice/crio-f52573c687f429b709a9e73424fbcb1cd16f1b9d9776aa0d7c54db2f686c9d7f WatchSource:0}: Error finding container f52573c687f429b709a9e73424fbcb1cd16f1b9d9776aa0d7c54db2f686c9d7f: Status 404 returned error can't find the container with id f52573c687f429b709a9e73424fbcb1cd16f1b9d9776aa0d7c54db2f686c9d7f Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.344964 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: W0130 21:14:49.355105 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b9eb477_4a6d_4f9c_ba41_5b79f5779ffb.slice/crio-58211f7a0c83df8b70ab0c94abdcc3c0824047a0aa11f216a22dea02a287a2b0 WatchSource:0}: Error finding container 58211f7a0c83df8b70ab0c94abdcc3c0824047a0aa11f216a22dea02a287a2b0: Status 404 returned error can't find the container with id 58211f7a0c83df8b70ab0c94abdcc3c0824047a0aa11f216a22dea02a287a2b0 Jan 30 21:14:49 crc kubenswrapper[4751]: W0130 21:14:49.359347 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9acdd0f1_560b_4246_b045_c598c5bbb93d.slice/crio-dcdd2681f89d24331613a907d6b693b2a900e2871cb876bc96ed6eafef9f6424 WatchSource:0}: Error finding container dcdd2681f89d24331613a907d6b693b2a900e2871cb876bc96ed6eafef9f6424: Status 404 returned error can't find the container with id dcdd2681f89d24331613a907d6b693b2a900e2871cb876bc96ed6eafef9f6424 Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.362593 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.371774 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.371818 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.371829 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.371844 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.371856 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:49Z","lastTransitionTime":"2026-01-30T21:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
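The repeated NodeNotReady condition spells out its own check: the runtime reports NetworkReady=false until a CNI config file exists in /etc/kubernetes/cni/net.d/, and that file is written by the still-initializing multus/ovn-kubernetes pods, so the node stays NotReady while they start. A minimal sketch of the directory scan, assuming the loader accepts the usual .conf/.conflist/.json extensions (as the reference libcni loader does):

```python
from pathlib import Path

CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")  # directory named in the log

def cni_config_files(conf_dir: Path) -> list[Path]:
    """Return candidate CNI config files, as a libcni-style loader would."""
    exts = (".conf", ".conflist", ".json")
    if not conf_dir.is_dir():
        return []
    return sorted(p for p in conf_dir.iterdir() if p.suffix in exts)

files = cni_config_files(CNI_CONF_DIR)
if not files:
    print(f"no CNI configuration file in {CNI_CONF_DIR}/ -> NetworkReady=false")
for f in files:
    print("found CNI config:", f)
```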
Has your network provider started?"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.379715 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.392103 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: W0130 21:14:49.400069 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee35b719_afe2_45cf_8726_00c19502f02f.slice/crio-3bcc38e3386dcff5d6367437a7d2019e5c6ebbb9b1f79e0df81b2cef34d1b95d WatchSource:0}: Error finding container 3bcc38e3386dcff5d6367437a7d2019e5c6ebbb9b1f79e0df81b2cef34d1b95d: Status 404 returned error can't find the container with id 3bcc38e3386dcff5d6367437a7d2019e5c6ebbb9b1f79e0df81b2cef34d1b95d Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.406315 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.419481 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.432634 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.449405 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.461559 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.471837 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.476052 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.476086 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.476097 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.476113 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.476122 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:49Z","lastTransitionTime":"2026-01-30T21:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.577827 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.577867 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.577894 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.577911 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.577922 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:49Z","lastTransitionTime":"2026-01-30T21:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.593959 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.594141 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.594192 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.594236 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.594415 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.594448 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.594464 4751 projected.go:194] Error preparing data for projected volume 
kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.594534 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:57.594503929 +0000 UTC m=+36.340326578 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.595449 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.595537 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:57.595510833 +0000 UTC m=+36.341333482 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.595530 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.595613 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:14:57.595604785 +0000 UTC m=+36.341427424 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.595651 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:57.595622275 +0000 UTC m=+36.341444944 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.680026 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.680077 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.680087 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.680101 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.680111 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:49Z","lastTransitionTime":"2026-01-30T21:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.695841 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.696061 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.696105 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.696121 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.696188 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:57.696170748 +0000 UTC m=+36.441993407 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.782529 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.782574 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.782588 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.782604 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.782615 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:49Z","lastTransitionTime":"2026-01-30T21:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.885379 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.885419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.885430 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.885448 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.885459 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:49Z","lastTransitionTime":"2026-01-30T21:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.928156 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:46:45.464069456 +0000 UTC Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.975705 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.975875 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.976376 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.976477 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.976598 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.976688 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.988268 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.988304 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.988315 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.988369 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.988382 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:49Z","lastTransitionTime":"2026-01-30T21:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.090485 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.090541 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.090549 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.090564 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.090591 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:50Z","lastTransitionTime":"2026-01-30T21:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.145778 4751 generic.go:334] "Generic (PLEG): container finished" podID="ee35b719-afe2-45cf-8726-00c19502f02f" containerID="d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a" exitCode=0 Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.145899 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" event={"ID":"ee35b719-afe2-45cf-8726-00c19502f02f","Type":"ContainerDied","Data":"d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.145986 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" event={"ID":"ee35b719-afe2-45cf-8726-00c19502f02f","Type":"ContainerStarted","Data":"3bcc38e3386dcff5d6367437a7d2019e5c6ebbb9b1f79e0df81b2cef34d1b95d"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.148605 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea" exitCode=0 Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.148686 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.148751 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"58211f7a0c83df8b70ab0c94abdcc3c0824047a0aa11f216a22dea02a287a2b0"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.151900 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5sgk2" event={"ID":"bcecdc4b-6607-4e4e-a9b5-49b85c030d21","Type":"ContainerStarted","Data":"a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.151945 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5sgk2" 
event={"ID":"bcecdc4b-6607-4e4e-a9b5-49b85c030d21","Type":"ContainerStarted","Data":"f52573c687f429b709a9e73424fbcb1cd16f1b9d9776aa0d7c54db2f686c9d7f"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.154598 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xdclq" event={"ID":"3e4f7eaf-acd6-4cf5-874c-d88c4e479113","Type":"ContainerStarted","Data":"63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.156800 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.156840 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.156854 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"dcdd2681f89d24331613a907d6b693b2a900e2871cb876bc96ed6eafef9f6424"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.169287 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.190947 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.193741 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.193788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.193804 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.193825 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.193841 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:50Z","lastTransitionTime":"2026-01-30T21:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.208455 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.228090 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.238860 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.252064 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.264438 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.288553 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.296245 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.296277 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.296289 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.296307 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.296321 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:50Z","lastTransitionTime":"2026-01-30T21:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.312973 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: 
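Every "Failed to update status for pod" record in this stretch fails for the same reason: the kubelet's strategic-merge patch (the $setElementOrder/conditions directive preserves the ordering of the conditions array) has to pass through the pod.network-node-identity.openshift.io mutating webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-30. A minimal diagnostic sketch in Go, not part of the log itself, that dials the endpoint named in the errors and prints the certificate's validity window:

// certcheck.go -- illustrative sketch only; the address 127.0.0.1:9743 is
// taken from the webhook errors above. InsecureSkipVerify lets us inspect
// the expired certificate instead of failing the handshake the way the
// kube-apiserver's webhook client does.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook: %v", err)
	}
	defer conn.Close()

	// After a successful handshake the server has presented at least one cert.
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
}

Until that certificate is rotated, every status patch the kubelet sends will keep failing with the same x509 error, which is why the identical webhook failure trails each pod's record below.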
I0130 21:14:50.325103 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.343541 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.362068 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.372300 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.393540 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.398123 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.398173 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.398193 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.398216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.398233 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:50Z","lastTransitionTime":"2026-01-30T21:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.404444 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.418913 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-d
ir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.430810 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.441315 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.453587 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.465136 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.476550 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.487481 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.500318 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.501357 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.501465 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.501530 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.501588 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.501648 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:50Z","lastTransitionTime":"2026-01-30T21:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.510427 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.527826 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.537715 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.605090 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.605142 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.605162 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.605187 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.605200 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:50Z","lastTransitionTime":"2026-01-30T21:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.708124 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.708152 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.708162 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.708175 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.708183 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:50Z","lastTransitionTime":"2026-01-30T21:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.811628 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.811983 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.811997 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.812011 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.812025 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:50Z","lastTransitionTime":"2026-01-30T21:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.914047 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.914086 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.914098 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.914114 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.914126 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:50Z","lastTransitionTime":"2026-01-30T21:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.931384 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 22:53:44.585202264 +0000 UTC Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.015875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.015912 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.015927 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.015946 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.015958 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:51Z","lastTransitionTime":"2026-01-30T21:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.118345 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.118716 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.118726 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.118742 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.118751 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:51Z","lastTransitionTime":"2026-01-30T21:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.162072 4751 generic.go:334] "Generic (PLEG): container finished" podID="ee35b719-afe2-45cf-8726-00c19502f02f" containerID="aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d" exitCode=0 Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.162150 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" event={"ID":"ee35b719-afe2-45cf-8726-00c19502f02f","Type":"ContainerDied","Data":"aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.167586 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.167616 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.167626 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.167635 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.167643 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.178157 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.193713 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.209867 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.221467 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.221518 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.221531 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.221549 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.221563 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:51Z","lastTransitionTime":"2026-01-30T21:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.222503 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.237502 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.256845 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.266053 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-h48zj"] Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.267520 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-h48zj" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.269546 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.270456 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.270526 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.270476 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.276285 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.301836 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z 
is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.309675 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/35ca91c5-bd9e-486b-943d-8123e2f6e84c-host\") pod \"node-ca-h48zj\" (UID: \"35ca91c5-bd9e-486b-943d-8123e2f6e84c\") " pod="openshift-image-registry/node-ca-h48zj" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.309715 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/35ca91c5-bd9e-486b-943d-8123e2f6e84c-serviceca\") pod \"node-ca-h48zj\" (UID: \"35ca91c5-bd9e-486b-943d-8123e2f6e84c\") " pod="openshift-image-registry/node-ca-h48zj" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.309805 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx9j8\" (UniqueName: \"kubernetes.io/projected/35ca91c5-bd9e-486b-943d-8123e2f6e84c-kube-api-access-wx9j8\") pod \"node-ca-h48zj\" (UID: \"35ca91c5-bd9e-486b-943d-8123e2f6e84c\") " pod="openshift-image-registry/node-ca-h48zj" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.312236 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1b
f8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.322754 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825
771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.323311 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.323350 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.323363 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.323380 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.323390 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:51Z","lastTransitionTime":"2026-01-30T21:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.333756 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.346846 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.356428 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.367475 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.378014 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.389368 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.400111 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.410475 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/35ca91c5-bd9e-486b-943d-8123e2f6e84c-host\") pod \"node-ca-h48zj\" (UID: \"35ca91c5-bd9e-486b-943d-8123e2f6e84c\") " pod="openshift-image-registry/node-ca-h48zj" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.410514 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/35ca91c5-bd9e-486b-943d-8123e2f6e84c-serviceca\") pod \"node-ca-h48zj\" (UID: \"35ca91c5-bd9e-486b-943d-8123e2f6e84c\") " pod="openshift-image-registry/node-ca-h48zj" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.410555 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx9j8\" (UniqueName: \"kubernetes.io/projected/35ca91c5-bd9e-486b-943d-8123e2f6e84c-kube-api-access-wx9j8\") pod \"node-ca-h48zj\" (UID: \"35ca91c5-bd9e-486b-943d-8123e2f6e84c\") " pod="openshift-image-registry/node-ca-h48zj" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.410597 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/35ca91c5-bd9e-486b-943d-8123e2f6e84c-host\") pod \"node-ca-h48zj\" (UID: \"35ca91c5-bd9e-486b-943d-8123e2f6e84c\") " pod="openshift-image-registry/node-ca-h48zj" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.411587 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/35ca91c5-bd9e-486b-943d-8123e2f6e84c-serviceca\") pod \"node-ca-h48zj\" (UID: \"35ca91c5-bd9e-486b-943d-8123e2f6e84c\") " pod="openshift-image-registry/node-ca-h48zj" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.412574 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.423321 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.425749 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.425793 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.425808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.425828 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.425842 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:51Z","lastTransitionTime":"2026-01-30T21:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.431671 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx9j8\" (UniqueName: \"kubernetes.io/projected/35ca91c5-bd9e-486b-943d-8123e2f6e84c-kube-api-access-wx9j8\") pod \"node-ca-h48zj\" (UID: \"35ca91c5-bd9e-486b-943d-8123e2f6e84c\") " pod="openshift-image-registry/node-ca-h48zj" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.445856 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z 
is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.457895 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.468025 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.480476 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.494686 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.504461 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.517663 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.529104 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.529140 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.529154 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.529171 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.529184 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:51Z","lastTransitionTime":"2026-01-30T21:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.531835 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.595130 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-h48zj" Jan 30 21:14:51 crc kubenswrapper[4751]: W0130 21:14:51.610661 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35ca91c5_bd9e_486b_943d_8123e2f6e84c.slice/crio-72b5fca5e2ba2ba03213f1a34b4c22f465e7f3ceed5749c155ef5736f1953d1e WatchSource:0}: Error finding container 72b5fca5e2ba2ba03213f1a34b4c22f465e7f3ceed5749c155ef5736f1953d1e: Status 404 returned error can't find the container with id 72b5fca5e2ba2ba03213f1a34b4c22f465e7f3ceed5749c155ef5736f1953d1e Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.632050 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.632105 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.632121 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.632142 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.632157 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:51Z","lastTransitionTime":"2026-01-30T21:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.735171 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.735202 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.735211 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.735223 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.735233 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:51Z","lastTransitionTime":"2026-01-30T21:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.747593 4751 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 21:14:51 crc kubenswrapper[4751]: W0130 21:14:51.748885 4751 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 30 21:14:51 crc kubenswrapper[4751]: W0130 21:14:51.748926 4751 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 30 21:14:51 crc kubenswrapper[4751]: W0130 21:14:51.749637 4751 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Jan 30 21:14:51 crc kubenswrapper[4751]: W0130 21:14:51.749785 4751 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.846507 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.846549 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.846559 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.846577 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.846588 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:51Z","lastTransitionTime":"2026-01-30T21:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.931619 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 00:23:25.723768978 +0000 UTC Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.949223 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.949258 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.949268 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.949284 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.949296 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:51Z","lastTransitionTime":"2026-01-30T21:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.974728 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.974738 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:51 crc kubenswrapper[4751]: E0130 21:14:51.974823 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.974728 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:51 crc kubenswrapper[4751]: E0130 21:14:51.974895 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:14:51 crc kubenswrapper[4751]: E0130 21:14:51.975030 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.988124 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.000357 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.016091 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.031786 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.043826 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.051041 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.051086 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.051098 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.051116 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.051127 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:52Z","lastTransitionTime":"2026-01-30T21:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.062493 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.079751 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.089813 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.109060 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.121245 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\
\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.131432 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.144002 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.153606 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.153669 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.153688 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.153713 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.153733 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:52Z","lastTransitionTime":"2026-01-30T21:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.156956 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.175228 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T
21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.175448 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.177086 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-h48zj" event={"ID":"35ca91c5-bd9e-486b-943d-8123e2f6e84c","Type":"ContainerStarted","Data":"7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.177142 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-h48zj" event={"ID":"35ca91c5-bd9e-486b-943d-8123e2f6e84c","Type":"ContainerStarted","Data":"72b5fca5e2ba2ba03213f1a34b4c22f465e7f3ceed5749c155ef5736f1953d1e"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.179277 4751 generic.go:334] "Generic (PLEG): container finished" podID="ee35b719-afe2-45cf-8726-00c19502f02f" containerID="876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d" exitCode=0 Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.179315 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" event={"ID":"ee35b719-afe2-45cf-8726-00c19502f02f","Type":"ContainerDied","Data":"876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.194397 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.205186 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.231822 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.245089 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\
\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.256195 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.256370 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.256464 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.256560 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.256705 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:52Z","lastTransitionTime":"2026-01-30T21:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.259101 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.269979 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.283675 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.296365 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.317513 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.335820 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.349035 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.359608 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.359640 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.359658 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.359672 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.359681 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:52Z","lastTransitionTime":"2026-01-30T21:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.363496 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.379762 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.396174 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.407847 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.417987 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.434990 4751 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnku
be-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1
74f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.448078 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.457907 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.461431 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.461464 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.461473 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.461488 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.461497 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:52Z","lastTransitionTime":"2026-01-30T21:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.469774 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.482958 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.495902 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.510760 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.526868 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.539915 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.551842 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.562685 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.564595 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.564647 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.564665 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.564688 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.564705 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:52Z","lastTransitionTime":"2026-01-30T21:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.582308 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.621432 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.667762 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.667819 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.667831 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.667851 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.667866 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:52Z","lastTransitionTime":"2026-01-30T21:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.770838 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.770888 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.770903 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.770922 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.770933 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:52Z","lastTransitionTime":"2026-01-30T21:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.874215 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.874272 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.874290 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.874314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.874362 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:52Z","lastTransitionTime":"2026-01-30T21:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.932617 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:43:24.098303272 +0000 UTC Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.937449 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.959689 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.977318 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.977361 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.977373 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.977385 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.977394 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:52Z","lastTransitionTime":"2026-01-30T21:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.080028 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.080082 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.080100 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.080123 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.080144 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:53Z","lastTransitionTime":"2026-01-30T21:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.182638 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.182707 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.182731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.182761 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.182783 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:53Z","lastTransitionTime":"2026-01-30T21:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.189069 4751 generic.go:334] "Generic (PLEG): container finished" podID="ee35b719-afe2-45cf-8726-00c19502f02f" containerID="9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6" exitCode=0 Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.189150 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" event={"ID":"ee35b719-afe2-45cf-8726-00c19502f02f","Type":"ContainerDied","Data":"9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6"} Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.216499 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.237079 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.251108 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.269023 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.281717 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.282700 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.285739 4751 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.285789 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.285806 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.285849 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.285867 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:53Z","lastTransitionTime":"2026-01-30T21:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.296943 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.332654 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994
82919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":
\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168
.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.349644 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.366777 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.387643 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.387924 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.387933 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.387947 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.387567 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.387955 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:53Z","lastTransitionTime":"2026-01-30T21:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.401852 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.417528 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.437986 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.448835 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.490528 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.490563 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.490571 4751 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.490585 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.490602 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:53Z","lastTransitionTime":"2026-01-30T21:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.593572 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.593631 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.593649 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.593676 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.593694 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:53Z","lastTransitionTime":"2026-01-30T21:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.696461 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.696528 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.696545 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.696569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.696587 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:53Z","lastTransitionTime":"2026-01-30T21:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.799208 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.799261 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.799281 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.799306 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.799361 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:53Z","lastTransitionTime":"2026-01-30T21:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.902443 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.902501 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.902518 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.902540 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.902557 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:53Z","lastTransitionTime":"2026-01-30T21:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.932804 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 10:38:49.766542375 +0000 UTC Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.975622 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:53 crc kubenswrapper[4751]: E0130 21:14:53.975822 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.975851 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:53 crc kubenswrapper[4751]: E0130 21:14:53.976004 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.975622 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:53 crc kubenswrapper[4751]: E0130 21:14:53.976148 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.005547 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.005612 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.005635 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.005666 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.005690 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:54Z","lastTransitionTime":"2026-01-30T21:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.108491 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.108565 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.108590 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.108620 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.108647 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:54Z","lastTransitionTime":"2026-01-30T21:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.203307 4751 generic.go:334] "Generic (PLEG): container finished" podID="ee35b719-afe2-45cf-8726-00c19502f02f" containerID="73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6" exitCode=0 Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.203389 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" event={"ID":"ee35b719-afe2-45cf-8726-00c19502f02f","Type":"ContainerDied","Data":"73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6"} Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.211996 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.212062 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.212085 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.212115 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.212138 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:54Z","lastTransitionTime":"2026-01-30T21:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.213621 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4"} Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.237971 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z 
is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.253351 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.268078 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.285254 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.301601 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
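Each of these "Failed to update status for pod" entries ends with the same pair of timestamps: a current time of 2026-01-30T21:14:54Z measured against a certificate notAfter of 2025-08-24T17:21:41Z. A quick sketch, using only those two values from the x509 error text, of how long the pod.network-node-identity.openshift.io webhook serving certificate had been expired when these entries were written:

    from datetime import datetime

    # The two timestamps from the x509 error text above ("Z" spelled out as
    # an explicit UTC offset so datetime.fromisoformat accepts it on 3.7+).
    now = datetime.fromisoformat("2026-01-30T21:14:54+00:00")
    not_after = datetime.fromisoformat("2025-08-24T17:21:41+00:00")

    # The cert had been expired for ~159 days, so every TLS handshake to
    # https://127.0.0.1:9743/pod fails before the status patch is ever sent.
    print(now > not_after, now - not_after)  # True 159 days, 3:53:13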
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.314742 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.314777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.314791 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.314808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.314822 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:54Z","lastTransitionTime":"2026-01-30T21:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.319274 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.333257 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.349700 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.364745 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.380955 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.397803 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.414015 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.417439 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.417485 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.417497 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.417515 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.417528 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:54Z","lastTransitionTime":"2026-01-30T21:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.430662 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.442642 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.520575 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.520634 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.520653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.520685 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.520705 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:54Z","lastTransitionTime":"2026-01-30T21:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.623717 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.623774 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.623790 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.623816 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.623834 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:54Z","lastTransitionTime":"2026-01-30T21:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.726051 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.726129 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.726154 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.726185 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.726207 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:54Z","lastTransitionTime":"2026-01-30T21:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.829936 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.829987 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.830003 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.830021 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.830033 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:54Z","lastTransitionTime":"2026-01-30T21:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.932523 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.932588 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.932606 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.932633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.932654 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:54Z","lastTransitionTime":"2026-01-30T21:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.933350 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:44:53.631722045 +0000 UTC Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.036659 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.036742 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.036753 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.036769 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.036780 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:55Z","lastTransitionTime":"2026-01-30T21:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.139585 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.139650 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.139667 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.139693 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.139711 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:55Z","lastTransitionTime":"2026-01-30T21:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.223708 4751 generic.go:334] "Generic (PLEG): container finished" podID="ee35b719-afe2-45cf-8726-00c19502f02f" containerID="483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed" exitCode=0 Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.223760 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" event={"ID":"ee35b719-afe2-45cf-8726-00c19502f02f","Type":"ContainerDied","Data":"483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed"} Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.243814 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.243875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.243897 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.243926 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.243945 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:55Z","lastTransitionTime":"2026-01-30T21:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.247245 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.267067 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.284717 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.302129 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.317642 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.330153 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.347081 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.347123 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.347159 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.347176 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.347187 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:55Z","lastTransitionTime":"2026-01-30T21:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.351230 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.366586 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.377105 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.390739 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.402510 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.414724 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.428231 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.447179 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.449956 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.450012 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.450031 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.450056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.450074 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:55Z","lastTransitionTime":"2026-01-30T21:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.551850 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.551881 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.551889 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.551902 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.551911 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:55Z","lastTransitionTime":"2026-01-30T21:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.653775 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.653816 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.653826 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.653840 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.653849 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:55Z","lastTransitionTime":"2026-01-30T21:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.755440 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.755474 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.755487 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.755505 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.755518 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:55Z","lastTransitionTime":"2026-01-30T21:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.857792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.857857 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.857875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.857900 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.857918 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:55Z","lastTransitionTime":"2026-01-30T21:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.933662 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:26:59.878073919 +0000 UTC Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.960016 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.960042 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.960050 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.960063 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.960082 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:55Z","lastTransitionTime":"2026-01-30T21:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.975476 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.975511 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:55 crc kubenswrapper[4751]: E0130 21:14:55.975633 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.975679 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:55 crc kubenswrapper[4751]: E0130 21:14:55.975757 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:14:55 crc kubenswrapper[4751]: E0130 21:14:55.975857 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.066585 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.066627 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.066666 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.066687 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.066708 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:56Z","lastTransitionTime":"2026-01-30T21:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.169172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.169239 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.169263 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.169307 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.169366 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:56Z","lastTransitionTime":"2026-01-30T21:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.232242 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.232730 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.239686 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" event={"ID":"ee35b719-afe2-45cf-8726-00c19502f02f","Type":"ContainerStarted","Data":"a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.265585 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.272267 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.272310 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.272345 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.272361 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.272373 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:56Z","lastTransitionTime":"2026-01-30T21:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.285100 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.325161 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.325522 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.346271 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.375259 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.375309 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.375348 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.375372 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.375389 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:56Z","lastTransitionTime":"2026-01-30T21:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.397175 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.412071 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.428252 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clus
ter-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.453652 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.470719 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.478666 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.478719 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.478733 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.478767 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.478782 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:56Z","lastTransitionTime":"2026-01-30T21:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.482743 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.511403 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run
/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-ope
nvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.536358 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserve
r-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.551527 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.568665 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.582291 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.582413 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.582432 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.582465 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.582484 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:56Z","lastTransitionTime":"2026-01-30T21:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.587588 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.604922 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.621438 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.645851 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-sock
et\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.661627 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.676676 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.685459 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.685493 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.685504 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.685521 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.685532 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:56Z","lastTransitionTime":"2026-01-30T21:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.699394 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.719471 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.738616 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.758820 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.780770 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.787979 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.788008 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.788016 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.788032 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.788043 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:56Z","lastTransitionTime":"2026-01-30T21:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.796518 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.812263 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.830152 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.890300 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.890830 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.891134 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.891374 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.891594 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:56Z","lastTransitionTime":"2026-01-30T21:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.934744 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 12:26:15.661475227 +0000 UTC Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.994056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.994108 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.994120 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.994136 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.994151 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:56Z","lastTransitionTime":"2026-01-30T21:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.097424 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.097484 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.097501 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.097525 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.097542 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:57Z","lastTransitionTime":"2026-01-30T21:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.200414 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.200504 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.200527 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.200952 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.201247 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:57Z","lastTransitionTime":"2026-01-30T21:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.243477 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.244144 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.278011 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.304545 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.304608 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.304631 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.304661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.304682 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:57Z","lastTransitionTime":"2026-01-30T21:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.308264 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.326886 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.354663 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.376915 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.396507 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.407286 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.407549 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.407822 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.407970 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.408091 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:57Z","lastTransitionTime":"2026-01-30T21:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.415966 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.434783 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.453801 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.469164 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.487985 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.508242 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.515480 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.515536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.515571 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.515601 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.515621 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:57Z","lastTransitionTime":"2026-01-30T21:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.532423 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.547408 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.579009 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.620198 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.620294 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.620312 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.620422 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.620511 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:57Z","lastTransitionTime":"2026-01-30T21:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.675560 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.675701 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.675763 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.675808 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.676005 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.676032 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.676277 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.676302 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.676409 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:15:13.676372682 +0000 UTC m=+52.422195371 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.676472 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:13.676444723 +0000 UTC m=+52.422267412 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.676507 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:13.676493256 +0000 UTC m=+52.422315935 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.677424 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.677639 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:13.67756604 +0000 UTC m=+52.423388749 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.725596 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.725673 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.725692 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.725716 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.725735 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:57Z","lastTransitionTime":"2026-01-30T21:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.776426 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.776862 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.776923 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.776944 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.777021 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:13.776996526 +0000 UTC m=+52.522819215 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.828608 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.828665 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.828688 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.828788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.828860 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:57Z","lastTransitionTime":"2026-01-30T21:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.932799 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.932849 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.932866 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.932888 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.932905 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:57Z","lastTransitionTime":"2026-01-30T21:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.936055 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:18:45.856778915 +0000 UTC Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.975371 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.975546 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.976052 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.976155 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.977026 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.977297 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.041162 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.041253 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.041272 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.041296 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.041313 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.099696 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.099751 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.099768 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.099793 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.099809 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: E0130 21:14:58.119959 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:58Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.124569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.124635 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.124653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.124680 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.124702 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: E0130 21:14:58.154829 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:58Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.160660 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.160701 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
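Every retry in this burst dies at the same point: the API server cannot validate the serving certificate of the node.network-node-identity webhook at https://127.0.0.1:9743, whose notAfter (2025-08-24T17:21:41Z) is months behind the node clock (2026-01-30). A minimal Go sketch, offered purely as a hypothetical diagnostic and not part of this log, that dials the endpoint from the node and prints the certificate's validity window:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Skip chain verification so the handshake succeeds even though the
	// serving certificate is expired; we only want to inspect it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\nnotBefore: %s\nnotAfter:  %s\n",
		cert.Subject, cert.NotBefore, cert.NotAfter)
	if time.Now().After(cert.NotAfter) {
		// Matches the x509 "certificate has expired" error in the entries above.
		fmt.Println("certificate has expired")
	}
}
```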
event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.160713 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.160732 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.160747 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: E0130 21:14:58.180314 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:58Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.183954 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.183989 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.184001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.184019 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.184031 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: E0130 21:14:58.203753 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:58Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.207392 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.207439 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
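Orthogonal to the webhook failure, the node keeps flapping to NotReady for one stated reason: the container runtime reports NetworkReady=false because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/ (on an OVN-Kubernetes node this normally appears once the network operator's pods come up, which they cannot while the cluster is wedged). A rough Go sketch of the directory check the message implies; the accepted extensions (.conf, .conflist, .json) follow libcni convention and are an assumption here:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// The readiness message names this directory explicitly.
	dir := "/etc/kubernetes/cni/net.d"
	var configs []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err == nil {
			configs = append(configs, matches...)
		}
	}
	if len(configs) == 0 {
		// Mirrors "no CNI configuration file in /etc/kubernetes/cni/net.d/".
		fmt.Fprintln(os.Stderr, "network not ready: no CNI configuration file found")
		os.Exit(1)
	}
	fmt.Println("CNI configuration present:", configs)
}
```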
event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.207451 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.207490 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.207503 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: E0130 21:14:58.220015 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:58Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:58 crc kubenswrapper[4751]: E0130 21:14:58.220216 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.222374 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
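The "exceeds retry count" entry closes one status-update cycle: the kubelet patches node status inside a small bounded retry loop (upstream kubelet uses a nodeStatusUpdateRetry constant of 5) and, once every attempt has failed on the webhook call, gives up until the next sync tick, which is why the whole pattern repeats. A compressed sketch of that control flow, not the actual kubelet code:

```go
package main

import (
	"errors"
	"fmt"
)

// Bounded retry, as in the upstream kubelet's nodeStatusUpdateRetry.
const nodeStatusUpdateRetry = 5

func updateNodeStatus(patch func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patch(); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	// Every attempt in the log dies on the same admission webhook call,
	// so the loop exhausts and surfaces the retry-count error.
	err := updateNodeStatus(func() error {
		return errors.New("failed calling webhook \"node.network-node-identity.openshift.io\": certificate has expired")
	})
	fmt.Println(err)
}
```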
event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.222413 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.222425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.222444 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.222456 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.246469 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.325562 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.325663 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.325681 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.325739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.325758 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.428702 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.428789 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.428813 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.428845 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.428868 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.531452 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.531510 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.531529 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.531553 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.531570 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.634899 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.634992 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.635019 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.635056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.635095 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.738711 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.738746 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.738754 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.738769 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.738778 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.842164 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.842212 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.842228 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.842250 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.842266 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.937379 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 19:50:12.98371259 +0000 UTC Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.944653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.944717 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.944737 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.944762 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.944780 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.048094 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.048151 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.048169 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.048195 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.048214 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:59Z","lastTransitionTime":"2026-01-30T21:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.150649 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.150711 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.150736 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.150764 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.150786 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:59Z","lastTransitionTime":"2026-01-30T21:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.249430 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.252921 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.252982 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.253001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.253024 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.253044 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:59Z","lastTransitionTime":"2026-01-30T21:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.282727 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.302913 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.322062 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.343650 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.355938 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.356220 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.356433 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.356616 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.356770 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:59Z","lastTransitionTime":"2026-01-30T21:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.367686 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.387030 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.407089 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.426637 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.446088 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.460810 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.460874 4751 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.460908 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.460931 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.460951 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:59Z","lastTransitionTime":"2026-01-30T21:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.462099 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 
2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.495435 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f
ea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPa
th\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.513446 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.535051 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab809005
7596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.549402 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.563554 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.563913 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.564069 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.564218 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.564412 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:59Z","lastTransitionTime":"2026-01-30T21:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.566021 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.666998 4751 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.667058 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.667073 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.667096 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.667113 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:59Z","lastTransitionTime":"2026-01-30T21:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.770390 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.770479 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.770502 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.770532 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.770556 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:59Z","lastTransitionTime":"2026-01-30T21:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.874090 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.874145 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.874162 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.874184 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.874201 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:59Z","lastTransitionTime":"2026-01-30T21:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.938012 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:00:10.877226964 +0000 UTC Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.974970 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.975024 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.974971 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:59 crc kubenswrapper[4751]: E0130 21:14:59.975199 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:14:59 crc kubenswrapper[4751]: E0130 21:14:59.975408 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:14:59 crc kubenswrapper[4751]: E0130 21:14:59.975547 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.976845 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.976892 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.976910 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.976972 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.976994 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:59Z","lastTransitionTime":"2026-01-30T21:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.079687 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.079760 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.079779 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.079803 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.079821 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:00Z","lastTransitionTime":"2026-01-30T21:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.183172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.183436 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.183544 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.183673 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.183786 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:00Z","lastTransitionTime":"2026-01-30T21:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.254029 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/0.log" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.257075 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b" exitCode=1 Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.257311 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b"} Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.258228 4751 scope.go:117] "RemoveContainer" containerID="3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.284937 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.286106 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.286139 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.286152 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.286170 4751 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.286183 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:00Z","lastTransitionTime":"2026-01-30T21:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.300526 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.317489 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.329847 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.340923 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.353345 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.365305 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.382202 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.389018 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.389056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.389067 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.389084 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.389095 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:00Z","lastTransitionTime":"2026-01-30T21:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.397846 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.413838 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.425131 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.440751 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI0130 21:14:58.875209 6139 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:14:58.875241 6139 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 21:14:58.875395 6139 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 21:14:58.875399 6139 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 21:14:58.875543 6139 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:14:58.875664 6139 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 21:14:58.875730 6139 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:14:58.875776 6139 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 21:14:58.875836 6139 factory.go:656] Stopping watch factory\\\\nI0130 21:14:58.875896 6139 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:14:58.875948 6139 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:14:58.875750 6139 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:14:58.876008 6139 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:14:58.876020 6139 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.454187 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.463105 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.491547 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.491575 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.491583 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.491595 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.491603 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:00Z","lastTransitionTime":"2026-01-30T21:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.594459 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.594711 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.594877 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.595060 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.595210 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:00Z","lastTransitionTime":"2026-01-30T21:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.698735 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.698806 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.698828 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.698860 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.698883 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:00Z","lastTransitionTime":"2026-01-30T21:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.801971 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.802027 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.802044 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.802066 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.802084 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:00Z","lastTransitionTime":"2026-01-30T21:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.904943 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.905010 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.905030 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.905058 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.905074 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:00Z","lastTransitionTime":"2026-01-30T21:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.939198 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 06:19:22.409344606 +0000 UTC Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.007625 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.007678 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.007697 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.007720 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.007736 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:01Z","lastTransitionTime":"2026-01-30T21:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.111425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.111493 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.111510 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.111535 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.111552 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:01Z","lastTransitionTime":"2026-01-30T21:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.214751 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.214810 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.214834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.214865 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.214884 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:01Z","lastTransitionTime":"2026-01-30T21:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.265915 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/0.log" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.270951 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f"} Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.271128 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.306199 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336
288f833aec37c4083d33e70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI0130 21:14:58.875209 6139 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:14:58.875241 6139 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 21:14:58.875395 6139 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 21:14:58.875399 6139 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 21:14:58.875543 6139 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:14:58.875664 6139 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 21:14:58.875730 6139 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:14:58.875776 6139 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 21:14:58.875836 6139 factory.go:656] Stopping watch factory\\\\nI0130 21:14:58.875896 6139 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:14:58.875948 6139 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:14:58.875750 6139 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:14:58.876008 6139 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:14:58.876020 6139 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.318999 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.319062 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.319084 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.319114 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.319136 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:01Z","lastTransitionTime":"2026-01-30T21:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.323491 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.377389 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.398212 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.419658 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.421872 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.422213 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.422485 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.422738 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.422925 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:01Z","lastTransitionTime":"2026-01-30T21:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.441756 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.457196 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.480724 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.499772 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.519038 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.525726 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.525950 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:01 crc 
kubenswrapper[4751]: I0130 21:15:01.526080 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.526204 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.526339 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:01Z","lastTransitionTime":"2026-01-30T21:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.540565 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.558128 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.577679 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.595769 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.629133 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.629190 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.629200 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.629217 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.629228 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:01Z","lastTransitionTime":"2026-01-30T21:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.731653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.731695 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.731710 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.731729 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.731742 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:01Z","lastTransitionTime":"2026-01-30T21:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.834910 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.834970 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.834985 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.835015 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.835029 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:01Z","lastTransitionTime":"2026-01-30T21:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.881800 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8"] Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.882195 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.884598 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.886627 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.899505 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"n
ame\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.912588 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.923658 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.935119 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.936763 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.936794 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.936807 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.936825 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.936837 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:01Z","lastTransitionTime":"2026-01-30T21:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.939506 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:43:41.14528966 +0000 UTC Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.951662 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.965058 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.975109 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:01 crc kubenswrapper[4751]: E0130 21:15:01.975449 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.975154 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:01 crc kubenswrapper[4751]: E0130 21:15:01.975719 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.975109 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:01 crc kubenswrapper[4751]: E0130 21:15:01.976194 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.976916 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.986838 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.986939 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.986964 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.986987 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.987034 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccttm\" (UniqueName: \"kubernetes.io/projected/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-kube-api-access-ccttm\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.006849 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336
288f833aec37c4083d33e70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI0130 21:14:58.875209 6139 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:14:58.875241 6139 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 21:14:58.875395 6139 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 21:14:58.875399 6139 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 21:14:58.875543 6139 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:14:58.875664 6139 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 21:14:58.875730 6139 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:14:58.875776 6139 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 21:14:58.875836 6139 factory.go:656] Stopping watch factory\\\\nI0130 21:14:58.875896 6139 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:14:58.875948 6139 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:14:58.875750 6139 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:14:58.876008 6139 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:14:58.876020 6139 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.021542 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.030959 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.038748 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.038779 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.038790 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.038807 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.038819 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:02Z","lastTransitionTime":"2026-01-30T21:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.042462 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z
\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.054593 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5
d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.071187 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.087581 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.087635 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.087671 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.087729 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccttm\" (UniqueName: \"kubernetes.io/projected/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-kube-api-access-ccttm\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.088289 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.088926 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 
21:15:02.089254 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.093480 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.102686 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.111254 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccttm\" (UniqueName: \"kubernetes.io/projected/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-kube-api-access-ccttm\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.120068 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.131555 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.141055 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.141097 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.141112 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.141139 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.141174 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:02Z","lastTransitionTime":"2026-01-30T21:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.141665 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.150918 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.159674 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.169778 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.179254 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.187160 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.197990 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.203713 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI0130 21:14:58.875209 6139 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:14:58.875241 6139 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 21:14:58.875395 6139 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 21:14:58.875399 6139 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 21:14:58.875543 6139 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:14:58.875664 6139 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 21:14:58.875730 6139 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:14:58.875776 6139 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 21:14:58.875836 6139 factory.go:656] Stopping watch factory\\\\nI0130 21:14:58.875896 6139 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:14:58.875948 6139 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:14:58.875750 6139 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:14:58.876008 6139 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:14:58.876020 6139 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: W0130 21:15:02.214159 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6ffb2e7_2c69_43bb_84cd_821c1dffd7d4.slice/crio-67ece8c9b14d9ad1f82c2b49fc44ec09ae56fca68fc40f980def0a57c36b5016 WatchSource:0}: Error finding container 67ece8c9b14d9ad1f82c2b49fc44ec09ae56fca68fc40f980def0a57c36b5016: Status 404 returned error can't find the container with id 67ece8c9b14d9ad1f82c2b49fc44ec09ae56fca68fc40f980def0a57c36b5016 Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.215925 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.225231 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.236513 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.243295 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:02 crc 
kubenswrapper[4751]: I0130 21:15:02.243350 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.243360 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.243374 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.243384 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:02Z","lastTransitionTime":"2026-01-30T21:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.247808 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.259899 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.275361 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" event={"ID":"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4","Type":"ContainerStarted","Data":"67ece8c9b14d9ad1f82c2b49fc44ec09ae56fca68fc40f980def0a57c36b5016"} Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.277370 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/1.log" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.277854 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/0.log" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.282768 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f" exitCode=1 Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.282798 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f"} Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.282828 4751 scope.go:117] "RemoveContainer" containerID="3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.283511 4751 scope.go:117] "RemoveContainer" containerID="12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f" Jan 30 21:15:02 crc kubenswrapper[4751]: E0130 21:15:02.283652 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\"" 
pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.297412 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.314173 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI0130 21:14:58.875209 6139 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:14:58.875241 6139 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 21:14:58.875395 6139 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 21:14:58.875399 6139 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 21:14:58.875543 6139 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:14:58.875664 6139 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 21:14:58.875730 6139 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:14:58.875776 6139 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 21:14:58.875836 6139 factory.go:656] Stopping watch factory\\\\nI0130 21:14:58.875896 6139 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:14:58.875948 6139 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:14:58.875750 6139 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:14:58.876008 6139 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:14:58.876020 6139 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:02Z\\\",\\\"message\\\":\\\"enshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0130 21:15:02.132224 6280 
services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 7.181037ms\\\\nI0130 21:15:02.132235 6280 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0130 21:15:02.132237 6280 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0130 21:15:02.132271 6280 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 4.894454ms\\\\nI0130 21:15:02.132289 6280 services_controller.go:356] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics for network=default\\\\nI0130 21:15:02.132281 6280 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-5sgk2\\\\nI0130 21:15:02.132313 6280 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-5sgk2\\\\nF0130 21:15:02.132347 6280 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.325252 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.336243 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.345182 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.345207 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.345215 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.345228 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.345236 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:02Z","lastTransitionTime":"2026-01-30T21:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.347358 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z
\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.361869 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5
d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.374112 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.386679 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.399270 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.412312 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.424098 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.437967 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.447249 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.447359 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:02 crc 
kubenswrapper[4751]: I0130 21:15:02.447378 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.447396 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.447405 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:02Z","lastTransitionTime":"2026-01-30T21:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.452819 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.463828 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.477437 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.550238 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.550307 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.550341 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.550374 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.550392 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:02Z","lastTransitionTime":"2026-01-30T21:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.653211 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.653260 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.653272 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.653290 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.653304 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:02Z","lastTransitionTime":"2026-01-30T21:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.756361 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.756401 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.756410 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.756424 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.756433 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:02Z","lastTransitionTime":"2026-01-30T21:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.858975 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.859023 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.859040 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.859064 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.859080 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:02Z","lastTransitionTime":"2026-01-30T21:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.939928 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 09:56:20.253169735 +0000 UTC Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.961635 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.961694 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.961726 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.961753 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.961770 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:02Z","lastTransitionTime":"2026-01-30T21:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.012222 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-c477w"] Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.012920 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:03 crc kubenswrapper[4751]: E0130 21:15:03.013017 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.032505 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.068187 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.068258 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.068281 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.068365 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.068391 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:03Z","lastTransitionTime":"2026-01-30T21:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.069375 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.082970 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.098813 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zds78\" (UniqueName: \"kubernetes.io/projected/3c30a687-0b58-4a63-b9e3-3a3624676358-kube-api-access-zds78\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.099140 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.114738 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI0130 21:14:58.875209 6139 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:14:58.875241 6139 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 21:14:58.875395 6139 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 21:14:58.875399 6139 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 21:14:58.875543 6139 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:14:58.875664 6139 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 21:14:58.875730 6139 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:14:58.875776 6139 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 21:14:58.875836 6139 factory.go:656] Stopping watch factory\\\\nI0130 21:14:58.875896 6139 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:14:58.875948 6139 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:14:58.875750 6139 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:14:58.876008 6139 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:14:58.876020 6139 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:02Z\\\",\\\"message\\\":\\\"enshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0130 21:15:02.132224 6280 services_controller.go:360] Finished syncing service route-controller-manager on 
namespace openshift-route-controller-manager for network=default : 7.181037ms\\\\nI0130 21:15:02.132235 6280 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0130 21:15:02.132237 6280 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0130 21:15:02.132271 6280 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 4.894454ms\\\\nI0130 21:15:02.132289 6280 services_controller.go:356] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics for network=default\\\\nI0130 21:15:02.132281 6280 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-5sgk2\\\\nI0130 21:15:02.132313 6280 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-5sgk2\\\\nF0130 21:15:02.132347 6280 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325
7453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.136436 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.154266 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.174134 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.174215 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.174235 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.174309 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.174855 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:03Z","lastTransitionTime":"2026-01-30T21:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.177541 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z
\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.200588 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5
d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.201108 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.201252 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zds78\" (UniqueName: \"kubernetes.io/projected/3c30a687-0b58-4a63-b9e3-3a3624676358-kube-api-access-zds78\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:03 crc kubenswrapper[4751]: E0130 21:15:03.201391 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:03 crc kubenswrapper[4751]: E0130 21:15:03.201496 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs podName:3c30a687-0b58-4a63-b9e3-3a3624676358 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:03.701472238 +0000 UTC m=+42.447294887 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs") pod "network-metrics-daemon-c477w" (UID: "3c30a687-0b58-4a63-b9e3-3a3624676358") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.220704 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.230043 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zds78\" (UniqueName: \"kubernetes.io/projected/3c30a687-0b58-4a63-b9e3-3a3624676358-kube-api-access-zds78\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.235413 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.255195 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.268818 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.280515 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.280572 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.280593 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.280635 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.280660 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:03Z","lastTransitionTime":"2026-01-30T21:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.288538 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.292493 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/1.log" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.302215 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" event={"ID":"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4","Type":"ContainerStarted","Data":"df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d"} Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.302299 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" event={"ID":"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4","Type":"ContainerStarted","Data":"3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d"} Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.308429 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.327776 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.340100 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.351860 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@s
ha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.363293 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.376667 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.383490 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.383541 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:03 crc 
kubenswrapper[4751]: I0130 21:15:03.383557 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.383615 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.383634 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:03Z","lastTransitionTime":"2026-01-30T21:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.386972 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.404610 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.420203 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.434041 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.450152 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.463110 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.478654 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.486795 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.486859 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.486895 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.486917 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.486932 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:03Z","lastTransitionTime":"2026-01-30T21:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.494798 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 
21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.513189 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.532541 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.550966 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.563245 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.588428 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI0130 21:14:58.875209 6139 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:14:58.875241 6139 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 21:14:58.875395 6139 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 21:14:58.875399 6139 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 21:14:58.875543 6139 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:14:58.875664 6139 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 21:14:58.875730 6139 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:14:58.875776 6139 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 21:14:58.875836 6139 factory.go:656] Stopping watch factory\\\\nI0130 21:14:58.875896 6139 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:14:58.875948 6139 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:14:58.875750 6139 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:14:58.876008 6139 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:14:58.876020 6139 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:02Z\\\",\\\"message\\\":\\\"enshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0130 21:15:02.132224 6280 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 7.181037ms\\\\nI0130 21:15:02.132235 6280 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0130 21:15:02.132237 6280 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0130 21:15:02.132271 6280 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 4.894454ms\\\\nI0130 21:15:02.132289 6280 services_controller.go:356] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics for network=default\\\\nI0130 
21:15:02.132281 6280 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-5sgk2\\\\nI0130 21:15:02.132313 6280 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-5sgk2\\\\nF0130 21:15:02.132347 6280 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.589133 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.589191 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.589209 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.589235 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.589253 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:03Z","lastTransitionTime":"2026-01-30T21:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.691832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.691875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.691886 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.691906 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.691917 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:03Z","lastTransitionTime":"2026-01-30T21:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.707415 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:03 crc kubenswrapper[4751]: E0130 21:15:03.707541 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:03 crc kubenswrapper[4751]: E0130 21:15:03.707594 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs podName:3c30a687-0b58-4a63-b9e3-3a3624676358 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:04.70757922 +0000 UTC m=+43.453401869 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs") pod "network-metrics-daemon-c477w" (UID: "3c30a687-0b58-4a63-b9e3-3a3624676358") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.794685 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.794731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.794744 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.794761 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.794775 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:03Z","lastTransitionTime":"2026-01-30T21:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.898168 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.898250 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.898279 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.898313 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.898370 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:03Z","lastTransitionTime":"2026-01-30T21:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.941014 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 16:56:10.160385303 +0000 UTC Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.975610 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.975653 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.975628 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:03 crc kubenswrapper[4751]: E0130 21:15:03.975790 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:03 crc kubenswrapper[4751]: E0130 21:15:03.976011 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:03 crc kubenswrapper[4751]: E0130 21:15:03.976159 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.001062 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.001095 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.001104 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.001137 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.001148 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:04Z","lastTransitionTime":"2026-01-30T21:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.104136 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.104171 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.104183 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.104201 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.104212 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:04Z","lastTransitionTime":"2026-01-30T21:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.208041 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.208089 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.208110 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.208137 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.208159 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:04Z","lastTransitionTime":"2026-01-30T21:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.311045 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.311100 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.311118 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.311160 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.311184 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:04Z","lastTransitionTime":"2026-01-30T21:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.413822 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.413902 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.413931 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.413962 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.413981 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:04Z","lastTransitionTime":"2026-01-30T21:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.518296 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.518388 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.518407 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.518429 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.518446 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:04Z","lastTransitionTime":"2026-01-30T21:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.621529 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.621569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.621578 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.621592 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.621601 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:04Z","lastTransitionTime":"2026-01-30T21:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.718874 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:04 crc kubenswrapper[4751]: E0130 21:15:04.719345 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:04 crc kubenswrapper[4751]: E0130 21:15:04.719531 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs podName:3c30a687-0b58-4a63-b9e3-3a3624676358 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:06.719506339 +0000 UTC m=+45.465328998 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs") pod "network-metrics-daemon-c477w" (UID: "3c30a687-0b58-4a63-b9e3-3a3624676358") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.724366 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.724395 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.724406 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.724421 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.724431 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:04Z","lastTransitionTime":"2026-01-30T21:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.827565 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.827631 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.827651 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.827679 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.827700 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:04Z","lastTransitionTime":"2026-01-30T21:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.930536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.930966 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.931118 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.931271 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.931439 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:04Z","lastTransitionTime":"2026-01-30T21:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.941966 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 13:48:46.543795961 +0000 UTC Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.975433 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:04 crc kubenswrapper[4751]: E0130 21:15:04.975560 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.034271 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.034772 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.034990 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.035232 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.035517 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:05Z","lastTransitionTime":"2026-01-30T21:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.138304 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.138395 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.138409 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.138437 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.138475 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:05Z","lastTransitionTime":"2026-01-30T21:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.242010 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.242067 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.242084 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.242109 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.242125 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:05Z","lastTransitionTime":"2026-01-30T21:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.344750 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.345073 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.345234 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.345453 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.345594 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:05Z","lastTransitionTime":"2026-01-30T21:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.448993 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.449058 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.449076 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.449099 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.449116 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:05Z","lastTransitionTime":"2026-01-30T21:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.551358 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.551422 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.551445 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.551477 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.551500 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:05Z","lastTransitionTime":"2026-01-30T21:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.654150 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.654532 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.654675 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.654825 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.654946 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:05Z","lastTransitionTime":"2026-01-30T21:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.757751 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.757832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.757857 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.757886 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.757909 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:05Z","lastTransitionTime":"2026-01-30T21:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.861026 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.861082 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.861102 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.861127 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.861145 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:05Z","lastTransitionTime":"2026-01-30T21:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.865146 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.866454 4751 scope.go:117] "RemoveContainer" containerID="12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f" Jan 30 21:15:05 crc kubenswrapper[4751]: E0130 21:15:05.866737 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.885863 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.906172 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.926922 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.942493 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 20:03:33.992828629 +0000 UTC Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.948740 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.964113 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.964178 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.964198 4751 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.964224 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.964242 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:05Z","lastTransitionTime":"2026-01-30T21:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.968231 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-conf
ig\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.974737 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.974798 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:05 crc kubenswrapper[4751]: E0130 21:15:05.974952 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:05 crc kubenswrapper[4751]: E0130 21:15:05.975112 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.975467 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:05 crc kubenswrapper[4751]: E0130 21:15:05.975863 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.990283 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.006106 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.025428 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.040849 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.067185 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.067277 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.067301 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.067357 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.067388 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:06Z","lastTransitionTime":"2026-01-30T21:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.070492 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:02Z\\\",\\\"message\\\":\\\"enshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0130 21:15:02.132224 6280 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 7.181037ms\\\\nI0130 21:15:02.132235 6280 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0130 21:15:02.132237 6280 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0130 21:15:02.132271 6280 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 4.894454ms\\\\nI0130 21:15:02.132289 6280 services_controller.go:356] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics for network=default\\\\nI0130 21:15:02.132281 6280 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-5sgk2\\\\nI0130 21:15:02.132313 6280 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-5sgk2\\\\nF0130 21:15:02.132347 6280 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.087760 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.100547 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.119229 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.134153 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.153803 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.167871 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.169311 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.169348 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.169358 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.169370 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.169379 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:06Z","lastTransitionTime":"2026-01-30T21:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.272320 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.272559 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.272579 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.272603 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.272621 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:06Z","lastTransitionTime":"2026-01-30T21:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.375920 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.375983 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.376001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.376026 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.376044 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:06Z","lastTransitionTime":"2026-01-30T21:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.479271 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.479323 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.479372 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.479394 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.479412 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:06Z","lastTransitionTime":"2026-01-30T21:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.581822 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.582429 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.582469 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.582495 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.582515 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:06Z","lastTransitionTime":"2026-01-30T21:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.685150 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.685468 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.685705 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.685942 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.686058 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:06Z","lastTransitionTime":"2026-01-30T21:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.741806 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:06 crc kubenswrapper[4751]: E0130 21:15:06.742015 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:06 crc kubenswrapper[4751]: E0130 21:15:06.742089 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs podName:3c30a687-0b58-4a63-b9e3-3a3624676358 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:10.742065087 +0000 UTC m=+49.487887776 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs") pod "network-metrics-daemon-c477w" (UID: "3c30a687-0b58-4a63-b9e3-3a3624676358") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.789160 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.789204 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.789220 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.789238 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.789251 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:06Z","lastTransitionTime":"2026-01-30T21:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.892215 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.892261 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.892272 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.892288 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.892302 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:06Z","lastTransitionTime":"2026-01-30T21:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.942831 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 14:48:48.48462215 +0000 UTC Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.975518 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:06 crc kubenswrapper[4751]: E0130 21:15:06.975668 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.995260 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.995374 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.995403 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.995432 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.995455 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:06Z","lastTransitionTime":"2026-01-30T21:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.099242 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.099314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.099366 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.099396 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.099414 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:07Z","lastTransitionTime":"2026-01-30T21:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.202434 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.202495 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.202512 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.202536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.202553 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:07Z","lastTransitionTime":"2026-01-30T21:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.306055 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.306139 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.306163 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.306192 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.306214 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:07Z","lastTransitionTime":"2026-01-30T21:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.409568 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.409625 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.409643 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.409671 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.409689 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:07Z","lastTransitionTime":"2026-01-30T21:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.513235 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.513295 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.513313 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.513403 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.513422 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:07Z","lastTransitionTime":"2026-01-30T21:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.616067 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.616125 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.616142 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.616165 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.616182 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:07Z","lastTransitionTime":"2026-01-30T21:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.719396 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.719454 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.719474 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.719503 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.719522 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:07Z","lastTransitionTime":"2026-01-30T21:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.821875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.821939 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.821957 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.821983 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.822000 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:07Z","lastTransitionTime":"2026-01-30T21:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.924482 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.924529 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.924583 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.924606 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.924654 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:07Z","lastTransitionTime":"2026-01-30T21:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.943378 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:59:42.92398453 +0000 UTC Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.975677 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.975744 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:07 crc kubenswrapper[4751]: E0130 21:15:07.975846 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.975867 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:07 crc kubenswrapper[4751]: E0130 21:15:07.976023 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:07 crc kubenswrapper[4751]: E0130 21:15:07.976124 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.027236 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.027314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.027388 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.027418 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.027437 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.129795 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.129848 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.129866 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.129888 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.129905 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.232514 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.232580 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.232603 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.232635 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.232660 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.335158 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.335211 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.335227 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.335254 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.335271 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.381041 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.381102 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.381124 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.381157 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.381181 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:08 crc kubenswrapper[4751]: E0130 21:15:08.401548 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:08Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.406605 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.406664 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.406680 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.406702 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.406720 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:08 crc kubenswrapper[4751]: E0130 21:15:08.426689 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:08Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.432464 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.432515 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.432528 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.432547 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.432565 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:08 crc kubenswrapper[4751]: E0130 21:15:08.454166 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:08Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.458882 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.458957 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.459019 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.459055 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.459079 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:08 crc kubenswrapper[4751]: E0130 21:15:08.477699 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:08Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.482890 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.482935 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.482952 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.482976 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.482993 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:08 crc kubenswrapper[4751]: E0130 21:15:08.500841 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:08Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:08 crc kubenswrapper[4751]: E0130 21:15:08.501097 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.503081 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
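The repeated status-patch attempts above fail identically before the kubelet gives up ("update node status exceeds retry count"). The failure is not in the patch itself but in the admission webhook node.network-node-identity.openshift.io: the serving certificate behind https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-30. A minimal sketch for confirming the expiry from a shell on the node, assuming openssl is installed (the host and port are taken from the Post URL in the error):

  # Print the validity window of the certificate presented by the webhook
  # endpoint named in the error above.
  echo | openssl s_client -connect 127.0.0.1:9743 2>/dev/null | openssl x509 -noout -dates
  # Expected here: notAfter=Aug 24 17:21:41 2025 GMT, earlier than the node's
  # current time, which is exactly what the x509 error is reporting.

A cluster that was shut down past the lifetime of its internal certificates, as is common with a long-dormant CRC VM, typically shows this on every webhook and API endpoint at once until the cluster's own certificate rotation catches up.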
event="NodeHasSufficientMemory" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.503124 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.503141 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.503164 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.503182 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.606706 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.606764 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.606780 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.606802 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.606820 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.709920 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.710305 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.710491 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.710675 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.710848 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.814623 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.814690 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.814709 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.814733 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.814750 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.917773 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.917809 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.917819 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.917835 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.917844 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.944495 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 00:52:36.886358016 +0000 UTC
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.974948 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w"
Jan 30 21:15:08 crc kubenswrapper[4751]: E0130 21:15:08.975079 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358"
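Every NotReady heartbeat in this window carries the same condition message: no CNI configuration file in /etc/kubernetes/cni/net.d/. The "Error syncing pod" entry above is the direct consequence: without a network plugin the kubelet cannot build a sandbox for any pod that needs cluster networking. A quick check of what the kubelet sees, as a hypothetical shell session (the directory is the one named in the log):

  # The directory the kubelet polls for a CNI network configuration.
  ls -l /etc/kubernetes/cni/net.d/
  # An empty directory means the network provider has not written its config;
  # on OpenShift that file normally comes from the OVN-Kubernetes pods, which
  # cannot become healthy themselves while the cluster certificates are expired.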
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.020172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.020255 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.020277 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.020304 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.020321 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:09Z","lastTransitionTime":"2026-01-30T21:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.123565 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.123641 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.123662 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.123694 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.123714 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:09Z","lastTransitionTime":"2026-01-30T21:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.226214 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.226287 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.226363 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.226399 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.226420 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:09Z","lastTransitionTime":"2026-01-30T21:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.329184 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.329247 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.329265 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.329296 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.329319 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:09Z","lastTransitionTime":"2026-01-30T21:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.432715 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.432778 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.432797 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.432822 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.432840 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:09Z","lastTransitionTime":"2026-01-30T21:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.536121 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.536197 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.536220 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.536252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.536275 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:09Z","lastTransitionTime":"2026-01-30T21:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.639475 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.639542 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.639560 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.639585 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.639601 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:09Z","lastTransitionTime":"2026-01-30T21:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.742017 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.742077 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.742094 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.742119 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.742137 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:09Z","lastTransitionTime":"2026-01-30T21:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.845659 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.845739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.845754 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.845780 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.845797 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:09Z","lastTransitionTime":"2026-01-30T21:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.945048 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:06:58.65643281 +0000 UTC
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.948501 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.948549 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.948562 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.948581 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.948594 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:09Z","lastTransitionTime":"2026-01-30T21:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.974845 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.974910 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:15:09 crc kubenswrapper[4751]: E0130 21:15:09.974991 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
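The certificate_manager lines are a separate and more benign signal: the kubelet-serving certificate is still valid until 2026-02-24, but its randomized rotation deadline (2025-12-29 in the earlier entry, re-jittered to 2025-11-25 here) already lies in the past, so the kubelet will try to rotate the certificate as soon as the API server will talk to it. A sketch for inspecting that certificate on disk; the path is the conventional kubelet PKI location, not one stated in this log:

  # Conventional location of the kubelet's current serving certificate.
  openssl x509 -noout -subject -dates -in /var/lib/kubelet/pki/kubelet-server-current.pem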
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.974860 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:09 crc kubenswrapper[4751]: E0130 21:15:09.975168 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:09 crc kubenswrapper[4751]: E0130 21:15:09.975441 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.052104 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.052186 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.052204 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.052225 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.052243 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:10Z","lastTransitionTime":"2026-01-30T21:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.155201 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.155233 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.155244 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.155257 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.155267 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:10Z","lastTransitionTime":"2026-01-30T21:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.261848 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.261921 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.261959 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.261992 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.262018 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:10Z","lastTransitionTime":"2026-01-30T21:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.364872 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.364935 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.364947 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.364986 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.365001 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:10Z","lastTransitionTime":"2026-01-30T21:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.468449 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.468502 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.468519 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.468543 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.468560 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:10Z","lastTransitionTime":"2026-01-30T21:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.571106 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.571436 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.572250 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.572586 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.572903 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:10Z","lastTransitionTime":"2026-01-30T21:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.676222 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.676993 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.677010 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.677027 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.677038 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:10Z","lastTransitionTime":"2026-01-30T21:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.781385 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.781528 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.781550 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.781575 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.781595 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:10Z","lastTransitionTime":"2026-01-30T21:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.789146 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:10 crc kubenswrapper[4751]: E0130 21:15:10.789354 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:10 crc kubenswrapper[4751]: E0130 21:15:10.789440 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs podName:3c30a687-0b58-4a63-b9e3-3a3624676358 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:18.789420345 +0000 UTC m=+57.535243004 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs") pod "network-metrics-daemon-c477w" (UID: "3c30a687-0b58-4a63-b9e3-3a3624676358") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.884231 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.884608 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.884782 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.884926 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.885078 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:10Z","lastTransitionTime":"2026-01-30T21:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.945138 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:51:57.985788634 +0000 UTC Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.975493 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:10 crc kubenswrapper[4751]: E0130 21:15:10.975620 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.988062 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.988319 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.988542 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.988684 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.988816 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:10Z","lastTransitionTime":"2026-01-30T21:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.092729 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.092782 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.092798 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.092824 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.092846 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:11Z","lastTransitionTime":"2026-01-30T21:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.195159 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.195527 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.195834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.196056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.196246 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:11Z","lastTransitionTime":"2026-01-30T21:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.299770 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.299832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.299857 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.299888 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.299912 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:11Z","lastTransitionTime":"2026-01-30T21:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.403598 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.403974 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.404110 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.404234 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.404387 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:11Z","lastTransitionTime":"2026-01-30T21:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.507654 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.507702 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.507714 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.507731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.507745 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:11Z","lastTransitionTime":"2026-01-30T21:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.611095 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.611155 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.611171 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.611194 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.611211 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:11Z","lastTransitionTime":"2026-01-30T21:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.714580 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.714643 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.714662 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.714686 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.714703 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:11Z","lastTransitionTime":"2026-01-30T21:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.817657 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.817715 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.817739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.817765 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.817786 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:11Z","lastTransitionTime":"2026-01-30T21:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.920797 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.920838 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.920847 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.920861 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.920873 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:11Z","lastTransitionTime":"2026-01-30T21:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.946263 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:14:19.622054116 +0000 UTC Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.974796 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.974867 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:11 crc kubenswrapper[4751]: E0130 21:15:11.974965 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.974992 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:11 crc kubenswrapper[4751]: E0130 21:15:11.975048 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:11 crc kubenswrapper[4751]: E0130 21:15:11.975093 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.994480 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:11Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.010203 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.022807 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.022846 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.022859 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.022874 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.022885 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:12Z","lastTransitionTime":"2026-01-30T21:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.031059 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.044361 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.057904 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.072798 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.096277 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.116946 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.126384 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.126426 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.126437 4751 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.126454 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.126466 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:12Z","lastTransitionTime":"2026-01-30T21:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.131535 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-conf
ig\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.149916 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.169761 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.191841 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.208787 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.228941 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.228979 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.228989 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.229009 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.229023 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:12Z","lastTransitionTime":"2026-01-30T21:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.238315 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/
\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:02Z\\\",\\\"message\\\":\\\"enshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0130 21:15:02.132224 6280 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 7.181037ms\\\\nI0130 21:15:02.132235 6280 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0130 21:15:02.132237 6280 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0130 21:15:02.132271 6280 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 4.894454ms\\\\nI0130 21:15:02.132289 6280 services_controller.go:356] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics for network=default\\\\nI0130 21:15:02.132281 6280 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-5sgk2\\\\nI0130 21:15:02.132313 6280 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-5sgk2\\\\nF0130 21:15:02.132347 6280 ovnkube.go:137] failed to run ovnkube: [failed to start 
network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\
",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.257092 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.272781 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.331970 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.332053 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.332082 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.332117 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.332139 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:12Z","lastTransitionTime":"2026-01-30T21:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.435247 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.435389 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.435419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.435450 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.435474 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:12Z","lastTransitionTime":"2026-01-30T21:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.537984 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.538031 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.538042 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.538062 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.538073 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:12Z","lastTransitionTime":"2026-01-30T21:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.641809 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.641866 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.641883 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.641907 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.641924 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:12Z","lastTransitionTime":"2026-01-30T21:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.744432 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.744505 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.744530 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.744553 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.744569 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:12Z","lastTransitionTime":"2026-01-30T21:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.847003 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.847068 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.847090 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.847120 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.847144 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:12Z","lastTransitionTime":"2026-01-30T21:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.947264 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 01:26:04.098821301 +0000 UTC Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.949292 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.949342 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.949354 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.949371 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.949383 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:12Z","lastTransitionTime":"2026-01-30T21:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.975099 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:12 crc kubenswrapper[4751]: E0130 21:15:12.975274 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.051534 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.051606 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.051631 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.051659 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.051680 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:13Z","lastTransitionTime":"2026-01-30T21:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[node-status block repeats at 21:15:13.155, 21:15:13.258, 21:15:13.361, 21:15:13.464, 21:15:13.567, and 21:15:13.687]
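The setters.go:603 records above are the kubelet rewriting the node's Ready condition on every status-update tick. A sketch of how such a condition object is built with the upstream API types (k8s.io/api/core/v1); the helper name is ours, and the kubelet's real setter additionally preserves lastTransitionTime when the status has not actually changed:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// notReadyCondition builds the condition seen in the "Node became not ready"
// records: status False, reason KubeletNotReady, the runtime's network error
// as the message. Hypothetical helper, for illustration only.
func notReadyCondition(runtimeErr string) corev1.NodeCondition {
	now := metav1.Now()
	return corev1.NodeCondition{
		Type:               corev1.NodeReady,
		Status:             corev1.ConditionFalse,
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message:            "container runtime network not ready: " + runtimeErr,
	}
}

func main() {
	c := notReadyCondition("NetworkReady=false reason:NetworkPluginNotReady")
	fmt.Printf("%s=%s reason=%s\n", c.Type, c.Status, c.Reason)
}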
Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.721679 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.721788 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:15:45.721770484 +0000 UTC m=+84.467593133 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.721894 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.722028 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.722075 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:45.722065181 +0000 UTC m=+84.467887830 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.722219 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.722273 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.722348 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.722397 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:45.722387568 +0000 UTC m=+84.468210217 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.722619 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.722701 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.722722 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.722825 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:45.722799698 +0000 UTC m=+84.468622347 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
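Each failed mount/unmount above is parked by nestedpendingoperations with "No retries permitted until ..." and a durationBeforeRetry of 32s: the wait doubles per consecutive failure up to a cap, which is standard exponential backoff. A self-contained sketch of that policy; the constants match the kubelet's volume-manager defaults as we understand them (500ms initial, ~2m2s cap, 32s being the 7th step), not values read from its source:

package main

import (
	"fmt"
	"time"
)

// exponentialBackoff reproduces the retry spacing visible in the log:
// each consecutive failure doubles the wait, capped at a maximum.
type exponentialBackoff struct {
	initial, max, current time.Duration
}

func (b *exponentialBackoff) next() time.Duration {
	if b.current == 0 {
		b.current = b.initial
	} else {
		b.current *= 2
		if b.current > b.max {
			b.current = b.max
		}
	}
	return b.current
}

func main() {
	b := &exponentialBackoff{initial: 500 * time.Millisecond, max: 2*time.Minute + 2*time.Second}
	deadline := time.Now()
	for i := 1; i <= 8; i++ {
		d := b.next()
		deadline = deadline.Add(d)
		fmt.Printf("failure %d: no retries permitted for %v (until %s)\n",
			i, d, deadline.Format("15:04:05"))
	}
}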
[node-status block repeats at 21:15:13.791]
Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.823708 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.823912 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.823960 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.823980 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.824081 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:45.824055987 +0000 UTC m=+84.569878676 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
[node-status block repeats at 21:15:13.894]
Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.948395 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 15:37:41.417987471 +0000 UTC
Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.974765 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.974767 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.974847 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.975010 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.975099 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
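The projected.go errors spell out what a kube-api-access-* volume is assembled from: the pod's service-account token plus the kube-root-ca.crt and (on OpenShift) openshift-service-ca.crt ConfigMaps, none of which are "registered" in the kubelet's object cache yet. A sketch of the equivalent volume source built with the upstream API types; the expiry, paths, and keys are illustrative values, not taken from this cluster:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// kubeAPIAccessVolume shows the pieces the projected.go errors refer to:
// the volume only becomes mountable once every projected source (token,
// kube-root-ca.crt, openshift-service-ca.crt) can be resolved.
func kubeAPIAccessVolume(name string) corev1.Volume {
	expiry := int64(3607)
	return corev1.Volume{
		Name: name, // e.g. "kube-api-access-s2dwl"
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						ExpirationSeconds: &expiry,
						Path:              "token",
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "service-ca.crt", Path: "service-ca.crt"}},
					}},
				},
			},
		},
	}
}

func main() {
	v := kubeAPIAccessVolume("kube-api-access-s2dwl")
	fmt.Printf("%s projects %d sources\n", v.Name, len(v.VolumeSource.Projected.Sources))
}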
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.975167 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.997202 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.997260 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.997311 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.997361 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.997379 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:13Z","lastTransitionTime":"2026-01-30T21:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.100574 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.100643 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.100664 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.100692 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.100713 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:14Z","lastTransitionTime":"2026-01-30T21:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.203739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.203812 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.203847 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.203867 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.203878 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:14Z","lastTransitionTime":"2026-01-30T21:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.306991 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.307053 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.307070 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.307094 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.307113 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:14Z","lastTransitionTime":"2026-01-30T21:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.409707 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.409789 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.409813 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.409842 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.409863 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:14Z","lastTransitionTime":"2026-01-30T21:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.512209 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.512266 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.512283 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.512306 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.512352 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:14Z","lastTransitionTime":"2026-01-30T21:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.615007 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.615074 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.615092 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.615116 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.615137 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:14Z","lastTransitionTime":"2026-01-30T21:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.719667 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.719757 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.719775 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.719833 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.719854 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:14Z","lastTransitionTime":"2026-01-30T21:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
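The pod_workers.go:1301 "Error syncing pod, skipping" entries are the per-pod consequence of the node-level condition: pods that need the cluster network cannot be synced until NetworkReady is true, while host-network pods are exempt. A reduced sketch of that gate, using our own function shape rather than the kubelet's:

package main

import (
	"errors"
	"fmt"
)

// Pod is a pared-down stand-in for the real pod object; only the field
// that matters for this gate is kept.
type Pod struct {
	Name        string
	HostNetwork bool
}

var errNetworkNotReady = errors.New(
	"network is not ready: container runtime network not ready: NetworkReady=false")

// canSyncPod models the gate behind the log entries: while the runtime
// network is down, only host-network pods may proceed; everything else is
// skipped and retried on the next sync tick.
func canSyncPod(pod Pod, networkReady bool) error {
	if !networkReady && !pod.HostNetwork {
		return errNetworkNotReady
	}
	return nil
}

func main() {
	pods := []Pod{
		{Name: "openshift-multus/network-metrics-daemon-c477w", HostNetwork: false},
		{Name: "example/host-network-pod", HostNetwork: true}, // hypothetical
	}
	for _, p := range pods {
		if err := canSyncPod(p, false); err != nil {
			fmt.Printf("Error syncing pod, skipping pod=%q err=%v\n", p.Name, err)
			continue
		}
		fmt.Printf("syncing pod=%q\n", p.Name)
	}
}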
[node-status block repeats at 21:15:14.823 and 21:15:14.926]
Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.949009 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:22:20.490884685 +0000 UTC
Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.975482 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w"
Jan 30 21:15:14 crc kubenswrapper[4751]: E0130 21:15:14.975678 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358"
[node-status block repeats at 21:15:15.029 and 21:15:15.132]
[node-status block repeats at 21:15:15.234, 21:15:15.337, 21:15:15.441, 21:15:15.544, 21:15:15.646, and 21:15:15.749]
[node-status block repeats at 21:15:15.853]
Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.949344 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 12:23:46.664502312 +0000 UTC
[node-status block repeats at 21:15:15.955]
Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.975126 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.975185 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:15:15 crc kubenswrapper[4751]: E0130 21:15:15.975290 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:15:15 crc kubenswrapper[4751]: E0130 21:15:15.975600 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.975960 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:15:15 crc kubenswrapper[4751]: E0130 21:15:15.976286 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[node-status block repeats at 21:15:16.059]
[node-status block repeats at 21:15:16.162, 21:15:16.266, 21:15:16.369, 21:15:16.472, 21:15:16.575, and 21:15:16.678]
[node-status block repeats at 21:15:16.781 and 21:15:16.884]
Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.949583 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 02:45:34.10408856 +0000 UTC
Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.975532 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w"
Jan 30 21:15:16 crc kubenswrapper[4751]: E0130 21:15:16.975721 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358"
[node-status block repeats at 21:15:16.987, 21:15:17.090, 21:15:17.194, and 21:15:17.297]
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.400597 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.400672 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.400695 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.400725 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.400748 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:17Z","lastTransitionTime":"2026-01-30T21:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.504012 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.504488 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.504961 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.505197 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.505438 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:17Z","lastTransitionTime":"2026-01-30T21:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.608636 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.609000 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.609161 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.609431 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.609617 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:17Z","lastTransitionTime":"2026-01-30T21:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.713763 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.714218 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.714498 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.714705 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.714909 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:17Z","lastTransitionTime":"2026-01-30T21:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.812758 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.818532 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.818605 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.818627 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.818662 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.818682 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:17Z","lastTransitionTime":"2026-01-30T21:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.828394 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.847968 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336
288f833aec37c4083d33e70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:02Z\\\",\\\"message\\\":\\\"enshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0130 21:15:02.132224 6280 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 7.181037ms\\\\nI0130 21:15:02.132235 6280 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0130 21:15:02.132237 6280 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0130 21:15:02.132271 6280 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 4.894454ms\\\\nI0130 21:15:02.132289 6280 services_controller.go:356] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics for network=default\\\\nI0130 21:15:02.132281 6280 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-5sgk2\\\\nI0130 21:15:02.132313 6280 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-5sgk2\\\\nF0130 21:15:02.132347 6280 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:17Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.865757 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:17Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.881836 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:17Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.899097 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:17Z is after 2025-08-24T17:21:41Z" Jan 30 
21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.917802 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:17Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.922272 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.922366 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.922392 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.922418 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.922439 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:17Z","lastTransitionTime":"2026-01-30T21:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.937109 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:17Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.949752 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 21:17:00.142597209 +0000 UTC Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.958320 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:17Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.975613 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.975727 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.975802 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.976267 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:17Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:17 crc kubenswrapper[4751]: E0130 21:15:17.976774 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:17 crc kubenswrapper[4751]: E0130 21:15:17.976522 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.976909 4751 scope.go:117] "RemoveContainer" containerID="12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f" Jan 30 21:15:17 crc kubenswrapper[4751]: E0130 21:15:17.976924 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.001046 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:17Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.023187 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z"
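Every "Failed to update status for pod" entry in this stretch of the log shares one root cause: the pod.network-node-identity.openshift.io webhook serving on https://127.0.0.1:9743 presents a certificate whose NotAfter (2025-08-24T17:21:41Z) is months behind the node clock (2026-01-30), so every Post from the status manager is rejected during the TLS handshake. A minimal sketch of the validity check that is failing, assuming the webhook's serving certificate can be read at the hypothetical path /etc/webhook-cert/tls.crt (the mount name appears in the network-node-identity-vrzqb status further down):

```go
// Minimal sketch: reproduces the x509 validity window check behind
// "certificate has expired or is not yet valid" in the entries above.
// The certificate path is hypothetical; only the mount directory
// /etc/webhook-cert/ is visible in the log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/webhook-cert/tls.crt") // hypothetical filename
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		// Mirrors the failure at 21:15:18: the current time is after
		// NotAfter (2025-08-24T17:21:41Z in the entries above).
		fmt.Printf("certificate expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```

Until that certificate is rotated, or the node clock falls back inside its validity window, every status patch in this log will keep failing the same way.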
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.040496 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.062385 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.083573 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.100987 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.113154 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z"
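Each err="failed to patch status ..." entry wraps a whole JSON merge patch in Go string quoting, which is why the payloads read as walls of \\\" escapes. One way to make them inspectable is to unquote the fragment and re-indent the JSON; a minimal sketch follows, assuming a single level of quoting (the journal capture may add more), with a short excerpt standing in for a full patch:

```go
// Minimal sketch: unquote a logged status patch and pretty-print it.
// The embedded sample is a tiny excerpt, not one of the full payloads above.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// As it appears in the log, minus the outer err="..." wrapper.
	escaped := `"{\"metadata\":{\"uid\":\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\"},\"status\":{\"phase\":\"Running\"}}"`

	// First level: strip the Go-style quoting added when the patch was logged.
	unquoted, err := strconv.Unquote(escaped)
	if err != nil {
		panic(err)
	}

	// Second level: the result is plain JSON; re-indent it for reading.
	var out bytes.Buffer
	if err := json.Indent(&out, []byte(unquoted), "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(out.String())
}
```

Applied to the payloads above, this turns each patch into a readable object (conditions, containerStatuses, volumeMounts) instead of a single escaped line.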
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.127838 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.127879 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.127897 4751 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.127921 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.127938 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.230869 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.230919 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.230931 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.230949 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.230960 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.336427 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.336501 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.336515 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.336534 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.336545 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.360961 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/1.log"
Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.366242 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab"}
Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.367241 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd"
Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.386151 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"876db467-5de4-469d-926f-72bd7360ff97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751b154b2de8ba6d171eac7b82c77498ed54b38d4c6759e35dacf49c57e3f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27dab33e7af8b89e8b6a3f6d3beff399121ca17e50406b83ec8a553598834ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037db6ab27fdac0f9290b2d34f883cc22a
c3c79f2b52a16e6579df97474da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.410609 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.435301 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.439737 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.439782 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.439792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.439811 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.439822 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.453245 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.478582 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:02Z\\\",\\\"message\\\":\\\"enshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0130 21:15:02.132224 6280 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 7.181037ms\\\\nI0130 21:15:02.132235 6280 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0130 21:15:02.132237 6280 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0130 21:15:02.132271 6280 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 4.894454ms\\\\nI0130 21:15:02.132289 6280 services_controller.go:356] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics for network=default\\\\nI0130 21:15:02.132281 6280 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-5sgk2\\\\nI0130 21:15:02.132313 6280 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-5sgk2\\\\nF0130 21:15:02.132347 6280 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.496659 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.511510 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.525200 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 
21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.542532 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.542578 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.542593 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.542613 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.542630 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.543944 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.561004 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.575110 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.590069 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.599496 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.612463 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.622566 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.635193 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.644804 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.644850 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.644866 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.644887 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.644900 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.650508 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.709747 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.709797 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.709811 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.709832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.709848 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: E0130 21:15:18.722673 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 
2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.726768 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.726829 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.726841 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.726866 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.726879 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: E0130 21:15:18.744288 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 
2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.747551 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.747594 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.747610 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.747628 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.747641 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: E0130 21:15:18.763398 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 
2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.767776 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.767816 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.767831 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.767850 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.767860 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: E0130 21:15:18.779596 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 
2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.783266 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.783301 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.783315 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.783352 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.783369 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: E0130 21:15:18.795370 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 
2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: E0130 21:15:18.795597 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.797245 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.797314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.797351 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.797385 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.797400 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.874024 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:18 crc kubenswrapper[4751]: E0130 21:15:18.874182 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:18 crc kubenswrapper[4751]: E0130 21:15:18.874258 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs podName:3c30a687-0b58-4a63-b9e3-3a3624676358 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:34.874237641 +0000 UTC m=+73.620060290 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs") pod "network-metrics-daemon-c477w" (UID: "3c30a687-0b58-4a63-b9e3-3a3624676358") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.900737 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.900782 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.900822 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.900847 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.900859 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.950230 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 18:39:10.853856798 +0000 UTC Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.974896 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:18 crc kubenswrapper[4751]: E0130 21:15:18.975163 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.004709 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.004794 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.004818 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.004850 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.004872 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:19Z","lastTransitionTime":"2026-01-30T21:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.108585 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.108710 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.108768 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.108801 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.108823 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:19Z","lastTransitionTime":"2026-01-30T21:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.211737 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.211800 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.211817 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.211842 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.211860 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:19Z","lastTransitionTime":"2026-01-30T21:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.320947 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.321027 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.321054 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.321086 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.321110 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:19Z","lastTransitionTime":"2026-01-30T21:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.373664 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/2.log" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.374850 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/1.log" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.379756 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab" exitCode=1 Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.379812 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.379870 4751 scope.go:117] "RemoveContainer" containerID="12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.381034 4751 scope.go:117] "RemoveContainer" containerID="520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab" Jan 30 21:15:19 crc kubenswrapper[4751]: E0130 21:15:19.381308 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.409137 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.424964 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.425033 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.425051 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.425079 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.425100 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:19Z","lastTransitionTime":"2026-01-30T21:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.429488 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.446617 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.461669 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"876db467-5de4-469d-926f-72bd7360ff97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751b154b2de8ba6d171eac7b82c77498ed54b38d4c6759e35dacf49c57e3f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27dab33e7af8b89e8b6a3f6d3beff399121ca17e50406b83ec8a553598834ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037db6ab27fdac0f9290b2d34f883cc22ac3c79f2b52a16e6579df97474da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.481206 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.497859 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.511170 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.527056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.527121 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.527138 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.527162 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.527180 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:19Z","lastTransitionTime":"2026-01-30T21:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.529719 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.546083 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799
488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.562140 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.580118 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.598157 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.630286 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.630347 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.630359 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.630377 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.630391 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:19Z","lastTransitionTime":"2026-01-30T21:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.658141 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/
\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:02Z\\\",\\\"message\\\":\\\"enshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0130 21:15:02.132224 6280 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 7.181037ms\\\\nI0130 21:15:02.132235 6280 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0130 21:15:02.132237 6280 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0130 21:15:02.132271 6280 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 4.894454ms\\\\nI0130 21:15:02.132289 6280 services_controller.go:356] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics for network=default\\\\nI0130 21:15:02.132281 6280 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-5sgk2\\\\nI0130 21:15:02.132313 6280 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-5sgk2\\\\nF0130 21:15:02.132347 6280 ovnkube.go:137] failed to run ovnkube: [failed to start 
network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"ss-canary for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0130 21:15:18.987753 6489 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0130 21:15:18.987765 6489 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0130 21:15:18.987796 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 21:15:18.987648 6489 obj_retry.go:303] Retry object setup: 
*\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.674030 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.686983 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.701032 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.712979 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.732801 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.732861 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.732882 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.732906 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.732924 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:19Z","lastTransitionTime":"2026-01-30T21:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.834914 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.834977 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.834994 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.835020 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.835041 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:19Z","lastTransitionTime":"2026-01-30T21:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.938243 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.938377 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.938405 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.938433 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.938455 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:19Z","lastTransitionTime":"2026-01-30T21:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.951147 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:23:57.127728638 +0000 UTC Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.975596 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.975679 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:19 crc kubenswrapper[4751]: E0130 21:15:19.975744 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:19 crc kubenswrapper[4751]: E0130 21:15:19.975888 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.975684 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:19 crc kubenswrapper[4751]: E0130 21:15:19.976142 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.041571 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.041637 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.041663 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.041695 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.041718 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:20Z","lastTransitionTime":"2026-01-30T21:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.144858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.144907 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.144925 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.144945 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.144961 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:20Z","lastTransitionTime":"2026-01-30T21:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.248481 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.248546 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.248567 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.248592 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.248612 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:20Z","lastTransitionTime":"2026-01-30T21:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.351110 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.351171 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.351189 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.351286 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.351307 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:20Z","lastTransitionTime":"2026-01-30T21:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.385947 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/2.log" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.391428 4751 scope.go:117] "RemoveContainer" containerID="520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab" Jan 30 21:15:20 crc kubenswrapper[4751]: E0130 21:15:20.391775 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.410051 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"876db467-5de4-469d-926f-72bd7360ff97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751b154b2de8ba6d171eac7b82c77498ed54b38d4c6759e35dacf49c57e3f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27dab33e7af8b89e8b6a3f6d3beff399121ca17e50406b83ec8a553598834ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037db6ab27fdac0f9290b2d34f883cc22ac3c79f2b52a16e6579df97474da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.430514 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.455053 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.455216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.455238 4751 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.455297 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.455319 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:20Z","lastTransitionTime":"2026-01-30T21:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.466153 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f8
32aee33055230fdb25dcdfab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"ss-canary for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0130 21:15:18.987753 6489 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0130 21:15:18.987765 6489 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0130 21:15:18.987796 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 21:15:18.987648 6489 obj_retry.go:303] Retry object setup: *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.485189 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.502485 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.520248 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 
21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.540854 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.557804 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.557858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.557876 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.557900 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.557917 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:20Z","lastTransitionTime":"2026-01-30T21:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.560437 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.580429 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.596009 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.611830 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.634922 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.654264 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.661254 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.661322 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.661388 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.661419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.661437 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:20Z","lastTransitionTime":"2026-01-30T21:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.678166 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.698948 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.717821 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.737563 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.764136 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.764234 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.764252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.764308 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.764381 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:20Z","lastTransitionTime":"2026-01-30T21:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.867965 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.868032 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.868052 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.868080 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.868098 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:20Z","lastTransitionTime":"2026-01-30T21:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.952075 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 09:52:19.698946166 +0000 UTC Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.971156 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.971219 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.971242 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.971269 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.971286 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:20Z","lastTransitionTime":"2026-01-30T21:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.975126 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:20 crc kubenswrapper[4751]: E0130 21:15:20.975507 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.074965 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.075223 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.075506 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.075751 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.075955 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:21Z","lastTransitionTime":"2026-01-30T21:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.178413 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.178731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.178896 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.179073 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.179244 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:21Z","lastTransitionTime":"2026-01-30T21:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.282673 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.282712 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.282731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.282777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.282798 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:21Z","lastTransitionTime":"2026-01-30T21:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.386311 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.386397 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.386415 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.386441 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.386459 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:21Z","lastTransitionTime":"2026-01-30T21:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.488858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.489157 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.489277 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.489402 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.489507 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:21Z","lastTransitionTime":"2026-01-30T21:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.592686 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.592792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.592812 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.592840 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.592861 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:21Z","lastTransitionTime":"2026-01-30T21:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.695232 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.695287 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.695299 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.695317 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.695359 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:21Z","lastTransitionTime":"2026-01-30T21:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.797700 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.797776 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.797795 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.797813 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.797860 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:21Z","lastTransitionTime":"2026-01-30T21:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.900579 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.900645 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.900662 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.900685 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.900702 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:21Z","lastTransitionTime":"2026-01-30T21:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.952250 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 13:46:01.579456107 +0000 UTC Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.974862 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.974894 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:21 crc kubenswrapper[4751]: E0130 21:15:21.975082 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:21 crc kubenswrapper[4751]: E0130 21:15:21.975233 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.975488 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:21 crc kubenswrapper[4751]: E0130 21:15:21.975771 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.995638 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.003644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 
21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.003708 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.003729 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.003759 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.003779 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:22Z","lastTransitionTime":"2026-01-30T21:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.020894 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.039056 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.063458 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.086132 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.105516 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.105581 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.105604 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.105633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.105655 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:22Z","lastTransitionTime":"2026-01-30T21:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.108615 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.127774 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.145658 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"876db467-5de4-469d-926f-72bd7360ff97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751b154b2de8ba6d171eac7b82c77498ed54b38d4c6759e35dacf49c57e3f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27dab33e7af8b89e8b6a3f6d3beff399121ca17e50406b83ec8a553598834ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037db6ab27fdac0f9290b2d34f883cc22ac3c79f2b52a16e6579df97474da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.168972 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.200918 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"ss-canary for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0130 21:15:18.987753 6489 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0130 21:15:18.987765 6489 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0130 21:15:18.987796 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 21:15:18.987648 6489 obj_retry.go:303] Retry object setup: *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.208616 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.208677 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.208697 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.208723 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.208741 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:22Z","lastTransitionTime":"2026-01-30T21:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.216738 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.232029 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.248787 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 
21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.268533 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.290519 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.312191 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.312255 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.312279 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.312309 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.312361 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:22Z","lastTransitionTime":"2026-01-30T21:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.312808 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.328662 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.415957 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.416020 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.416036 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.416085 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.416103 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:22Z","lastTransitionTime":"2026-01-30T21:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.518974 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.519022 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.519038 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.519061 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.519077 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:22Z","lastTransitionTime":"2026-01-30T21:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.621541 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.621617 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.621636 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.621660 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.621677 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:22Z","lastTransitionTime":"2026-01-30T21:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.725056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.725154 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.725173 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.725231 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.725250 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:22Z","lastTransitionTime":"2026-01-30T21:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.828015 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.828106 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.828155 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.828180 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.828197 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:22Z","lastTransitionTime":"2026-01-30T21:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.930732 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.930774 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.930784 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.930799 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.930811 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:22Z","lastTransitionTime":"2026-01-30T21:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.953281 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 20:53:47.339332728 +0000 UTC Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.975730 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:22 crc kubenswrapper[4751]: E0130 21:15:22.975970 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.034225 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.034305 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.034362 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.034389 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.034407 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:23Z","lastTransitionTime":"2026-01-30T21:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.137274 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.137320 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.137365 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.137387 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.137404 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:23Z","lastTransitionTime":"2026-01-30T21:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.240857 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.240910 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.240924 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.240945 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.240957 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:23Z","lastTransitionTime":"2026-01-30T21:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.343816 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.343848 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.343859 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.343877 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.343887 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:23Z","lastTransitionTime":"2026-01-30T21:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.446653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.446718 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.446739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.446768 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.446791 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:23Z","lastTransitionTime":"2026-01-30T21:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.550310 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.550421 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.550449 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.550478 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.550500 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:23Z","lastTransitionTime":"2026-01-30T21:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.653003 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.653273 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.653516 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.653738 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.653922 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:23Z","lastTransitionTime":"2026-01-30T21:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.757346 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.757387 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.757398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.757415 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.757429 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:23Z","lastTransitionTime":"2026-01-30T21:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.860071 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.860466 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.860626 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.860798 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.861009 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:23Z","lastTransitionTime":"2026-01-30T21:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.954256 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:32:23.18169999 +0000 UTC Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.964506 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.964577 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.964592 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.964613 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.964627 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:23Z","lastTransitionTime":"2026-01-30T21:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.974895 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.974968 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.975222 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:23 crc kubenswrapper[4751]: E0130 21:15:23.975454 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:23 crc kubenswrapper[4751]: E0130 21:15:23.975588 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:23 crc kubenswrapper[4751]: E0130 21:15:23.975665 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.067852 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.068132 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.068279 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.068505 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.068653 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:24Z","lastTransitionTime":"2026-01-30T21:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.171756 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.171798 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.171816 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.171845 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.171867 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:24Z","lastTransitionTime":"2026-01-30T21:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.275124 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.275425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.275648 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.275840 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.275994 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:24Z","lastTransitionTime":"2026-01-30T21:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.380608 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.380665 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.380683 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.380708 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.380760 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:24Z","lastTransitionTime":"2026-01-30T21:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.482791 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.482846 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.482864 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.482890 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.482910 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:24Z","lastTransitionTime":"2026-01-30T21:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.585858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.585919 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.585936 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.585960 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.586015 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:24Z","lastTransitionTime":"2026-01-30T21:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.689788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.689848 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.689872 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.689902 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.689924 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:24Z","lastTransitionTime":"2026-01-30T21:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.792598 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.792644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.792660 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.792682 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.792699 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:24Z","lastTransitionTime":"2026-01-30T21:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.896252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.896316 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.896376 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.896406 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.896424 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:24Z","lastTransitionTime":"2026-01-30T21:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.955179 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:31:28.865214967 +0000 UTC Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.975690 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:24 crc kubenswrapper[4751]: E0130 21:15:24.975941 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.000232 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.000291 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.000314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.000398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.000423 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:25Z","lastTransitionTime":"2026-01-30T21:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.105683 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.105734 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.105746 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.105763 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.105777 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:25Z","lastTransitionTime":"2026-01-30T21:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.208825 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.209169 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.209445 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.209665 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.209773 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:25Z","lastTransitionTime":"2026-01-30T21:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.312551 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.312615 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.312633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.312659 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.312682 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:25Z","lastTransitionTime":"2026-01-30T21:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.415499 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.415569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.415587 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.415613 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.415631 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:25Z","lastTransitionTime":"2026-01-30T21:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.518948 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.518997 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.519016 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.519040 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.519057 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:25Z","lastTransitionTime":"2026-01-30T21:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.621438 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.621499 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.621521 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.621546 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.621569 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:25Z","lastTransitionTime":"2026-01-30T21:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.724500 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.724537 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.724551 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.724568 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.724581 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:25Z","lastTransitionTime":"2026-01-30T21:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.827392 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.827443 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.827460 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.827483 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.827649 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:25Z","lastTransitionTime":"2026-01-30T21:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.930299 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.930415 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.930434 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.930868 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.931552 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:25Z","lastTransitionTime":"2026-01-30T21:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.955931 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:31:13.067896086 +0000 UTC Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.975282 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.975416 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:25 crc kubenswrapper[4751]: E0130 21:15:25.975502 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:25 crc kubenswrapper[4751]: E0130 21:15:25.975709 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.975933 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:25 crc kubenswrapper[4751]: E0130 21:15:25.976038 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.034041 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.034084 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.034097 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.034115 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.034130 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:26Z","lastTransitionTime":"2026-01-30T21:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.136970 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.137023 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.137033 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.137049 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.137059 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:26Z","lastTransitionTime":"2026-01-30T21:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.239608 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.239651 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.239661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.239678 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.239688 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:26Z","lastTransitionTime":"2026-01-30T21:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.341765 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.341829 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.341846 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.341870 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.341889 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:26Z","lastTransitionTime":"2026-01-30T21:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.444252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.444309 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.444318 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.444349 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.444359 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:26Z","lastTransitionTime":"2026-01-30T21:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.547266 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.547355 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.547372 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.547404 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.547421 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:26Z","lastTransitionTime":"2026-01-30T21:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.650030 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.650093 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.650112 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.650141 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.650159 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:26Z","lastTransitionTime":"2026-01-30T21:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.752774 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.752830 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.752841 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.752857 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.752867 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:26Z","lastTransitionTime":"2026-01-30T21:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.854942 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.854978 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.854989 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.855001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.855010 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:26Z","lastTransitionTime":"2026-01-30T21:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.956124 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:27:15.261660972 +0000 UTC Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.957973 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.958023 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.958064 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.958091 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.958113 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:26Z","lastTransitionTime":"2026-01-30T21:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.974986 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:26 crc kubenswrapper[4751]: E0130 21:15:26.975147 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.060307 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.060401 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.060419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.060450 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.060469 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:27Z","lastTransitionTime":"2026-01-30T21:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.163714 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.163767 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.163784 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.163808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.163824 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:27Z","lastTransitionTime":"2026-01-30T21:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.265805 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.265853 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.265872 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.265894 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.265912 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:27Z","lastTransitionTime":"2026-01-30T21:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.368569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.368595 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.368604 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.368616 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.368625 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:27Z","lastTransitionTime":"2026-01-30T21:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.471034 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.471072 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.471082 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.471099 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.471110 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:27Z","lastTransitionTime":"2026-01-30T21:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.573516 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.573547 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.573555 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.573567 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.573576 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:27Z","lastTransitionTime":"2026-01-30T21:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.675751 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.675785 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.675794 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.675806 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.675815 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:27Z","lastTransitionTime":"2026-01-30T21:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.778497 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.778543 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.778552 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.778571 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.778586 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:27Z","lastTransitionTime":"2026-01-30T21:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.881763 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.881824 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.881844 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.881867 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.881884 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:27Z","lastTransitionTime":"2026-01-30T21:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.956628 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 10:49:38.851672945 +0000 UTC Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.976311 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:27 crc kubenswrapper[4751]: E0130 21:15:27.976462 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.976561 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.976619 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:27 crc kubenswrapper[4751]: E0130 21:15:27.976731 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:27 crc kubenswrapper[4751]: E0130 21:15:27.976937 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.984006 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.984037 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.984048 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.984062 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.984097 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:27Z","lastTransitionTime":"2026-01-30T21:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.086420 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.086483 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.086501 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.086527 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.086547 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.189422 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.189491 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.189510 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.189538 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.189557 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.293148 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.293237 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.293265 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.293302 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.293365 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.396624 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.396683 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.396703 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.396732 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.396753 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.499832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.499898 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.499917 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.499941 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.499957 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.602823 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.602865 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.602879 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.602897 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.602906 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.705571 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.705648 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.705671 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.705706 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.705726 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.808411 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.808450 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.808470 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.808488 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.808501 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.809714 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.809755 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.809773 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.809789 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.809801 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: E0130 21:15:28.825860 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.829777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.829841 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.829859 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.829885 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.829902 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: E0130 21:15:28.847722 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.851719 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.851778 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.851797 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.851820 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.851838 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: E0130 21:15:28.870385 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.873875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.873902 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.873911 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.873926 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.873937 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: E0130 21:15:28.890918 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.895025 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.895056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.895065 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.895079 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.895089 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: E0130 21:15:28.908296 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:28 crc kubenswrapper[4751]: E0130 21:15:28.908556 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.910462 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.910492 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.910501 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.910513 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.910522 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.957116 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 12:56:48.089561854 +0000 UTC Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.975523 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:28 crc kubenswrapper[4751]: E0130 21:15:28.975708 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.013234 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.013267 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.013280 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.013298 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.013310 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:29Z","lastTransitionTime":"2026-01-30T21:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.115680 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.115700 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.115712 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.115725 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.115733 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:29Z","lastTransitionTime":"2026-01-30T21:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.218471 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.218495 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.218506 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.218517 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.218525 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:29Z","lastTransitionTime":"2026-01-30T21:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.321425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.321501 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.321526 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.321554 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.321572 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:29Z","lastTransitionTime":"2026-01-30T21:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.424278 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.424358 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.424375 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.424399 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.424415 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:29Z","lastTransitionTime":"2026-01-30T21:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.526764 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.526818 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.526883 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.526900 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.526936 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:29Z","lastTransitionTime":"2026-01-30T21:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.629046 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.629078 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.629089 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.629105 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.629115 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:29Z","lastTransitionTime":"2026-01-30T21:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.731972 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.732009 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.732021 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.732036 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.732047 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:29Z","lastTransitionTime":"2026-01-30T21:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.834222 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.834252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.834263 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.834278 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.834289 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:29Z","lastTransitionTime":"2026-01-30T21:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.936917 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.936980 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.936997 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.937021 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.937038 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:29Z","lastTransitionTime":"2026-01-30T21:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.957422 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:57:42.66713939 +0000 UTC Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.974950 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.974993 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.975013 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:29 crc kubenswrapper[4751]: E0130 21:15:29.975045 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:29 crc kubenswrapper[4751]: E0130 21:15:29.975109 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:29 crc kubenswrapper[4751]: E0130 21:15:29.975224 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.039752 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.039788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.039801 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.039818 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.039831 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:30Z","lastTransitionTime":"2026-01-30T21:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
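
Alongside the heartbeat loop, the same three pods cycle once per second: util.go reports that no sandbox exists yet, and pod_workers then skips the sync because the network is not ready, each time naming the pod and its podUID. A small sketch that pairs pod names with their UIDs from such lines (again assuming the dump sits in kubelet.log; the pattern mirrors the pod="..." podUID="..." fields visible in the entries above):

    import re

    pairs = set()
    with open("kubelet.log", encoding="utf-8", errors="replace") as f:
        for line in f:
            # "Error syncing pod, skipping" lines end with pod= and podUID= fields.
            m = re.search(r'pod="([^"]+)" podUID="([^"]+)"', line)
            if m:
                pairs.add(m.groups())

    for pod, uid in sorted(pairs):
        print(f"{pod}  {uid}")
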
Has your network provider started?"} Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.142366 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.142416 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.142433 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.142456 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.142472 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:30Z","lastTransitionTime":"2026-01-30T21:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.244653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.244698 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.244708 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.244723 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.244732 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:30Z","lastTransitionTime":"2026-01-30T21:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.346908 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.346952 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.346962 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.346976 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.346986 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:30Z","lastTransitionTime":"2026-01-30T21:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.448600 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.448640 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.448649 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.448665 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.448676 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:30Z","lastTransitionTime":"2026-01-30T21:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.551516 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.551556 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.551566 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.551582 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.551592 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:30Z","lastTransitionTime":"2026-01-30T21:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.654790 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.654835 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.654846 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.654863 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.654875 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:30Z","lastTransitionTime":"2026-01-30T21:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.757929 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.757968 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.757978 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.757990 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.757999 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:30Z","lastTransitionTime":"2026-01-30T21:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.859887 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.859957 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.859966 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.859978 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.859987 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:30Z","lastTransitionTime":"2026-01-30T21:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.957570 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 20:09:19.543757078 +0000 UTC Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.962443 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.962469 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.962479 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.962495 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.962507 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:30Z","lastTransitionTime":"2026-01-30T21:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.975594 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:30 crc kubenswrapper[4751]: E0130 21:15:30.975770 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.065701 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.065747 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.065765 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.065788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.065804 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:31Z","lastTransitionTime":"2026-01-30T21:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.168438 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.168546 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.168566 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.168592 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.168612 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:31Z","lastTransitionTime":"2026-01-30T21:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.271192 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.271229 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.271240 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.271261 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.271273 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:31Z","lastTransitionTime":"2026-01-30T21:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.374023 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.374085 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.374101 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.374120 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.374135 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:31Z","lastTransitionTime":"2026-01-30T21:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.476481 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.476552 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.476569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.476590 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.476608 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:31Z","lastTransitionTime":"2026-01-30T21:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.579715 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.579771 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.579789 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.579814 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.579831 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:31Z","lastTransitionTime":"2026-01-30T21:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.682599 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.682663 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.682686 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.682713 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.682733 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:31Z","lastTransitionTime":"2026-01-30T21:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.784874 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.784909 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.784920 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.784935 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.784946 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:31Z","lastTransitionTime":"2026-01-30T21:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.887287 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.887374 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.887398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.887424 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.887444 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:31Z","lastTransitionTime":"2026-01-30T21:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.958497 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 20:54:01.221782074 +0000 UTC Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.974876 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.974945 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.974954 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:31 crc kubenswrapper[4751]: E0130 21:15:31.975066 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:31 crc kubenswrapper[4751]: E0130 21:15:31.975119 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:31 crc kubenswrapper[4751]: E0130 21:15:31.975173 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.990382 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.990472 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.990489 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.990511 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.990530 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:31Z","lastTransitionTime":"2026-01-30T21:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
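
From here the failures change character: status_manager cannot patch pod status because the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a certificate that expired 2025-08-24T17:21:41Z while the node clock reads 2026-01-30T21:15:31Z. A sketch that pulls both timestamps out of such an error and reports how far past NotAfter the clock is; the sample string is copied from the entry below, and the timestamp format is an assumption based on it:

    import re
    from datetime import datetime, timezone

    err = ('tls: failed to verify certificate: x509: certificate has expired '
           'or is not yet valid: current time 2026-01-30T21:15:31Z is after '
           '2025-08-24T17:21:41Z')

    now_s, not_after_s = re.search(
        r'current time (\S+) is after (\S+)', err).groups()
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    now = datetime.strptime(now_s, fmt).replace(tzinfo=timezone.utc)
    not_after = datetime.strptime(not_after_s, fmt).replace(tzinfo=timezone.utc)
    print("certificate expired", now - not_after, "ago")
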
Has your network provider started?"} Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.991974 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.009156 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.026803 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.043685 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"876db467-5de4-469d-926f-72bd7360ff97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751b154b2de8ba6d171eac7b82c77498ed54b38d4c6759e35dacf49c57e3f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27dab33e7af8b89e8b6a3f6d3beff399121ca17e50406b83ec8a553598834ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037db6ab27fdac0f9290b2d34f883cc22ac3c79f2b52a16e6579df97474da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.061574 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.076067 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.089153 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.092112 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.092154 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.092183 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.092203 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.092214 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:32Z","lastTransitionTime":"2026-01-30T21:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.118430 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"ss-canary for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0130 21:15:18.987753 6489 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0130 21:15:18.987765 6489 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0130 21:15:18.987796 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 21:15:18.987648 6489 obj_retry.go:303] Retry object setup: *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.129365 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.139483 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.151994 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 
21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.165554 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.178675 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.190244 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.193819 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.193918 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
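Every status-patch failure above has the same root cause: the serving certificate behind the pod.network-node-identity.openshift.io webhook expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-30. The error wording comes from Go's crypto/x509 validity-window check, which rejects a certificate whenever the current time falls outside [NotBefore, NotAfter]. Below is a minimal standalone sketch of that test for a PEM certificate on disk; it is illustrative only, not the kubelet's own code, and the path argument is hypothetical.

```go
// expirycheck.go: reproduce the validity-window test that Go's crypto/x509
// applies during TLS verification. Outside [NotBefore, NotAfter], the
// handshake fails with "certificate has expired or is not yet valid".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func checkValidity(pemBytes []byte, now time.Time) error {
	block, _ := pem.Decode(pemBytes)
	if block == nil || block.Type != "CERTIFICATE" {
		return fmt.Errorf("no CERTIFICATE block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return fmt.Errorf("parse certificate: %w", err)
	}
	// The same comparison the TLS stack makes before trusting the chain;
	// the message mirrors the expired case seen in the journal entries.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		return fmt.Errorf("certificate has expired or is not yet valid: current time %s is after %s",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
	return nil
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: expirycheck <cert.pem>")
		os.Exit(2)
	}
	pemBytes, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := checkValidity(pemBytes, time.Now()); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("certificate is within its validity window")
}
```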
event="NodeHasNoDiskPressure" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.193927 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.193941 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.193949 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:32Z","lastTransitionTime":"2026-01-30T21:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.207991 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f878594
95ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.221124 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.235787 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
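Each err field above carries the attempted patch as a quoted JSON string: one level of Go quoting in the raw journal, with the extra backslashes here coming from nested quoting. The following is a hypothetical helper for turning payloads like the one after it back into readable JSON; strconv.Unquote strips one level of escaping, and the sample patch is abbreviated rather than copied from the log.

```go
// patchdump.go: decode an escaped status-patch payload from a
// "Failed to update status for pod" journal entry.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// The payload as it appears inside err="...": a Go-quoted JSON string
	// (abbreviated sample; the uid is the network-operator pod's from above).
	quoted := `"{\"metadata\":{\"uid\":\"37a5e44f-9a88-4405-be8a-b645485e7312\"},\"status\":{\"phase\":\"Running\"}}"`

	unquoted, err := strconv.Unquote(quoted) // remove one level of escaping
	if err != nil {
		panic(err)
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(unquoted), "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(pretty.String())
}
```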
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.296391 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.296418 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.296425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.296439 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.296461 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:32Z","lastTransitionTime":"2026-01-30T21:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.398010 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.398063 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.398081 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.398108 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.398125 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:32Z","lastTransitionTime":"2026-01-30T21:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.500633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.500672 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.500682 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.500698 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.500707 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:32Z","lastTransitionTime":"2026-01-30T21:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.602962 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.603019 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.603036 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.603060 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.603079 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:32Z","lastTransitionTime":"2026-01-30T21:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.705999 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.706063 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.706085 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.706117 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.706139 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:32Z","lastTransitionTime":"2026-01-30T21:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.809132 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.809165 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.809176 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.809192 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.809202 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:32Z","lastTransitionTime":"2026-01-30T21:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.911530 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.911582 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.911600 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.911623 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.911640 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:32Z","lastTransitionTime":"2026-01-30T21:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.959139 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 00:08:07.500903077 +0000 UTC Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.975536 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:32 crc kubenswrapper[4751]: E0130 21:15:32.975936 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.013749 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.013775 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.013783 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.013813 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.013822 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:33Z","lastTransitionTime":"2026-01-30T21:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.116060 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.116092 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.116101 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.116119 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.116130 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:33Z","lastTransitionTime":"2026-01-30T21:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.218858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.218929 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.218946 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.218972 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.218993 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:33Z","lastTransitionTime":"2026-01-30T21:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.321593 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.321659 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.321681 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.321712 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.321735 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:33Z","lastTransitionTime":"2026-01-30T21:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.424673 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.424708 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.424716 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.424731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.424740 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:33Z","lastTransitionTime":"2026-01-30T21:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.526575 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.526602 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.526610 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.526625 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.526634 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:33Z","lastTransitionTime":"2026-01-30T21:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.628612 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.628658 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.628669 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.628687 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.628703 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:33Z","lastTransitionTime":"2026-01-30T21:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.731362 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.731419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.731435 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.731458 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.731475 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:33Z","lastTransitionTime":"2026-01-30T21:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.834259 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.834318 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.834364 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.834388 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.834406 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:33Z","lastTransitionTime":"2026-01-30T21:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.936782 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.936823 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.936832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.936850 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.936859 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:33Z","lastTransitionTime":"2026-01-30T21:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.959398 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 18:14:16.289249103 +0000 UTC Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.975615 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.975637 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:33 crc kubenswrapper[4751]: E0130 21:15:33.975718 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.975779 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:33 crc kubenswrapper[4751]: E0130 21:15:33.975866 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:33 crc kubenswrapper[4751]: E0130 21:15:33.975971 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.039882 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.039935 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.039947 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.039963 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.039973 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:34Z","lastTransitionTime":"2026-01-30T21:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.142809 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.142850 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.142859 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.142878 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.142889 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:34Z","lastTransitionTime":"2026-01-30T21:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.245469 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.245737 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.245924 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.246073 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.246230 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:34Z","lastTransitionTime":"2026-01-30T21:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.349100 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.349150 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.349166 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.349197 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.349213 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:34Z","lastTransitionTime":"2026-01-30T21:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.451085 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.451124 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.451134 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.451148 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.451157 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:34Z","lastTransitionTime":"2026-01-30T21:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.553483 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.553530 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.553542 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.553558 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.553573 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:34Z","lastTransitionTime":"2026-01-30T21:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.656277 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.656315 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.656343 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.656359 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.656373 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:34Z","lastTransitionTime":"2026-01-30T21:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.758141 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.758174 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.758184 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.758197 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.758207 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:34Z","lastTransitionTime":"2026-01-30T21:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.859951 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.860271 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.860564 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.860733 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.860896 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:34Z","lastTransitionTime":"2026-01-30T21:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.960438 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 01:39:19.892805281 +0000 UTC Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.963018 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.963084 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.963102 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.963127 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.963146 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:34Z","lastTransitionTime":"2026-01-30T21:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.969398 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:34 crc kubenswrapper[4751]: E0130 21:15:34.969690 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:34 crc kubenswrapper[4751]: E0130 21:15:34.969895 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs podName:3c30a687-0b58-4a63-b9e3-3a3624676358 nodeName:}" failed. No retries permitted until 2026-01-30 21:16:06.969867786 +0000 UTC m=+105.715690475 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs") pod "network-metrics-daemon-c477w" (UID: "3c30a687-0b58-4a63-b9e3-3a3624676358") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.975065 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:34 crc kubenswrapper[4751]: E0130 21:15:34.975163 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.976722 4751 scope.go:117] "RemoveContainer" containerID="520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab" Jan 30 21:15:34 crc kubenswrapper[4751]: E0130 21:15:34.977094 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.066375 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.066680 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.066827 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.066985 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.067106 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:35Z","lastTransitionTime":"2026-01-30T21:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.170002 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.170034 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.170045 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.170058 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.170073 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:35Z","lastTransitionTime":"2026-01-30T21:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.272470 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.272507 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.272520 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.272535 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.272545 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:35Z","lastTransitionTime":"2026-01-30T21:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.374945 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.375001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.375013 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.375031 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.375040 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:35Z","lastTransitionTime":"2026-01-30T21:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.476886 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.476921 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.476932 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.476956 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.476969 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:35Z","lastTransitionTime":"2026-01-30T21:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.578845 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.578888 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.578898 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.578912 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.578919 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:35Z","lastTransitionTime":"2026-01-30T21:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.681275 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.681400 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.681438 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.681468 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.681503 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:35Z","lastTransitionTime":"2026-01-30T21:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.786711 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.786924 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.786998 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.787075 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.787207 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:35Z","lastTransitionTime":"2026-01-30T21:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.894401 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.894536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.894602 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.894676 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.894739 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:35Z","lastTransitionTime":"2026-01-30T21:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.961053 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:16:47.156281457 +0000 UTC Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.975475 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.975534 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.975490 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:35 crc kubenswrapper[4751]: E0130 21:15:35.975597 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:35 crc kubenswrapper[4751]: E0130 21:15:35.975710 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:35 crc kubenswrapper[4751]: E0130 21:15:35.975797 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.997101 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.997172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.997195 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.997222 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.997244 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:35Z","lastTransitionTime":"2026-01-30T21:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.100132 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.100200 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.100219 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.100636 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.100660 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:36Z","lastTransitionTime":"2026-01-30T21:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.203944 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.204019 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.204042 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.204071 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.204094 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:36Z","lastTransitionTime":"2026-01-30T21:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.307108 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.307484 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.307667 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.307866 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.308040 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:36Z","lastTransitionTime":"2026-01-30T21:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.410916 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.411000 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.411027 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.411059 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.411085 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:36Z","lastTransitionTime":"2026-01-30T21:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.443281 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5sgk2_bcecdc4b-6607-4e4e-a9b5-49b85c030d21/kube-multus/0.log" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.443362 4751 generic.go:334] "Generic (PLEG): container finished" podID="bcecdc4b-6607-4e4e-a9b5-49b85c030d21" containerID="a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c" exitCode=1 Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.443404 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5sgk2" event={"ID":"bcecdc4b-6607-4e4e-a9b5-49b85c030d21","Type":"ContainerDied","Data":"a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c"} Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.443919 4751 scope.go:117] "RemoveContainer" containerID="a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.465910 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"876db467-5de4-469d-926f-72bd7360ff97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751b154b2de8ba6d171eac7b82c77498ed54b38d4c6759e35dacf49c57e3f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27dab33e7af8b89e8b6a3f6d3beff399121ca17e50406b83ec8a553598834ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037db6ab27fdac0f9290b2d34f883cc22ac3c79f2b52a16e6579df97474da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.487479 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.506743 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.514132 4751 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.514163 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.514172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.514186 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.514194 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:36Z","lastTransitionTime":"2026-01-30T21:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.521672 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.538421 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.554916 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manag
er-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.571047 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.590154 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.601659 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.617057 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.617107 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.617127 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.617150 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.617168 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:36Z","lastTransitionTime":"2026-01-30T21:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.620590 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"ss-canary for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0130 21:15:18.987753 6489 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0130 21:15:18.987765 6489 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0130 21:15:18.987796 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 21:15:18.987648 6489 obj_retry.go:303] Retry object setup: *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.635315 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f
4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.647617 4751 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.660204 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.670645 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.681997 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.698264 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.713597 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:35Z\\\",\\\"message\\\":\\\"2026-01-30T21:14:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5dc3d9cf-ea3c-4a0d-90cc-2d599ddcdb3c\\\\n2026-01-30T21:14:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5dc3d9cf-ea3c-4a0d-90cc-2d599ddcdb3c to /host/opt/cni/bin/\\\\n2026-01-30T21:14:50Z [verbose] multus-daemon started\\\\n2026-01-30T21:14:50Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:15:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.719199 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.719246 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.719258 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.719275 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.719288 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:36Z","lastTransitionTime":"2026-01-30T21:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.822613 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.822664 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.822681 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.822703 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.822719 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:36Z","lastTransitionTime":"2026-01-30T21:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.927293 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.927388 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.927400 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.927417 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.927430 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:36Z","lastTransitionTime":"2026-01-30T21:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.961953 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 20:26:34.178614112 +0000 UTC Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.975386 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:36 crc kubenswrapper[4751]: E0130 21:15:36.975566 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.030025 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.030080 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.030097 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.030121 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.030139 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:37Z","lastTransitionTime":"2026-01-30T21:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.132901 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.132951 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.132967 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.133002 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.133037 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:37Z","lastTransitionTime":"2026-01-30T21:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.235681 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.235739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.235759 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.235783 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.235807 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:37Z","lastTransitionTime":"2026-01-30T21:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.339168 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.339216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.339229 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.339245 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.339256 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:37Z","lastTransitionTime":"2026-01-30T21:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.441954 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.442014 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.442026 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.442040 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.442049 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:37Z","lastTransitionTime":"2026-01-30T21:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.448271 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5sgk2_bcecdc4b-6607-4e4e-a9b5-49b85c030d21/kube-multus/0.log" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.448397 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5sgk2" event={"ID":"bcecdc4b-6607-4e4e-a9b5-49b85c030d21","Type":"ContainerStarted","Data":"2c6ea3db26de86b678d2306adc7f90c1d03797d9dd14847d766d709276053d02"} Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.464500 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.493082 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.512532 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.535725 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.544316 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.544411 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:37 crc 
kubenswrapper[4751]: I0130 21:15:37.544431 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.544457 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.544475 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:37Z","lastTransitionTime":"2026-01-30T21:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.551492 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.569243 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.585263 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6ea3db26de86b678d2306adc7f90c1d03797d9dd14847d766d709276053d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:35Z\\\",\\\"message\\\":\\\"2026-01-30T21:14:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5dc3d9cf-ea3c-4a0d-90cc-2d599ddcdb3c\\\\n2026-01-30T21:14:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5dc3d9cf-ea3c-4a0d-90cc-2d599ddcdb3c to /host/opt/cni/bin/\\\\n2026-01-30T21:14:50Z [verbose] multus-daemon started\\\\n2026-01-30T21:14:50Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:15:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.601054 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"876db467-5de4-469d-926f-72bd7360ff97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751b154b2de8ba6d171eac7b82c77498ed54b38d4c6759e35dacf49c57e3f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27dab33e7af8b89e8b6a3f6d3beff399121ca17e50406b83ec8a553598834ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037db6ab27fdac0f9290b2d34f883cc22ac3c79f2b52a16e6579df97474da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.615888 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.637112 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f8
32aee33055230fdb25dcdfab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"ss-canary for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0130 21:15:18.987753 6489 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0130 21:15:18.987765 6489 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0130 21:15:18.987796 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 21:15:18.987648 6489 obj_retry.go:303] Retry object setup: *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.647162 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.647237 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.647261 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.647290 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.647310 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:37Z","lastTransitionTime":"2026-01-30T21:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.651441 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.662550 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.673789 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 
21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.690566 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.702830 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.718133 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.732968 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.750750 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.750792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.750805 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.750825 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.750840 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:37Z","lastTransitionTime":"2026-01-30T21:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.853449 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.853510 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.853527 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.853550 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.853567 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:37Z","lastTransitionTime":"2026-01-30T21:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.956644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.956688 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.956701 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.956718 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.956730 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:37Z","lastTransitionTime":"2026-01-30T21:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.962524 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 15:05:32.166042055 +0000 UTC Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.975774 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.975827 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.975781 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:37 crc kubenswrapper[4751]: E0130 21:15:37.975985 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:37 crc kubenswrapper[4751]: E0130 21:15:37.976113 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:37 crc kubenswrapper[4751]: E0130 21:15:37.976306 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.059575 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.059624 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.059639 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.059661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.059679 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:38Z","lastTransitionTime":"2026-01-30T21:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.162658 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.162714 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.162732 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.162787 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.162807 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:38Z","lastTransitionTime":"2026-01-30T21:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.265049 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.265117 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.265134 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.265159 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.265177 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:38Z","lastTransitionTime":"2026-01-30T21:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.367886 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.367968 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.367989 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.368023 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.368047 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:38Z","lastTransitionTime":"2026-01-30T21:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.470197 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.470246 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.470259 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.470276 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.470291 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:38Z","lastTransitionTime":"2026-01-30T21:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.573691 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.573767 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.573790 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.573821 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.573844 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:38Z","lastTransitionTime":"2026-01-30T21:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.677065 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.677138 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.677157 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.677183 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.677198 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:38Z","lastTransitionTime":"2026-01-30T21:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.779731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.779790 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.779821 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.779845 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.779862 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:38Z","lastTransitionTime":"2026-01-30T21:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.882215 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.882270 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.882286 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.882310 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.882355 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:38Z","lastTransitionTime":"2026-01-30T21:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.963232 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 10:31:46.527719174 +0000 UTC Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.975692 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:38 crc kubenswrapper[4751]: E0130 21:15:38.975933 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.985072 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.985126 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.985144 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.985171 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.985188 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:38Z","lastTransitionTime":"2026-01-30T21:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.088641 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.088710 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.088737 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.088764 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.088785 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.192004 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.192090 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.192116 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.192149 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.192174 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.224639 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.224723 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.224746 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.224777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.224800 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: E0130 21:15:39.245729 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:39Z is after 
2025-08-24T17:21:41Z" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.250792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.250870 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.250893 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.250927 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.250951 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: E0130 21:15:39.277007 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:39Z is after 
2025-08-24T17:21:41Z" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.282544 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.282633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.282653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.282675 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.282691 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: E0130 21:15:39.303355 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:39Z is after 
2025-08-24T17:21:41Z" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.308029 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.308080 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.308099 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.308121 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.308136 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: E0130 21:15:39.326375 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:39Z is after 
2025-08-24T17:21:41Z" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.331640 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.331699 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.331717 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.331740 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.331759 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: E0130 21:15:39.353990 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:39Z is after 
2025-08-24T17:21:41Z" Jan 30 21:15:39 crc kubenswrapper[4751]: E0130 21:15:39.354262 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.356477 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.356525 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.356541 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.356565 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.356582 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.459702 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.459847 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.459908 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.459932 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.459950 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.562694 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.562761 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.562779 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.562804 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.562822 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.665419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.665514 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.665532 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.665557 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.665575 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.768450 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.768715 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.768858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.768998 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.769140 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.872742 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.873521 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.873568 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.873599 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.873618 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.963676 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 21:33:35.228411435 +0000 UTC Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.975075 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:39 crc kubenswrapper[4751]: E0130 21:15:39.975295 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.975573 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:39 crc kubenswrapper[4751]: E0130 21:15:39.975751 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.975902 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:39 crc kubenswrapper[4751]: E0130 21:15:39.976059 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.977487 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.977536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.977558 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.977588 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.977612 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.079874 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.079928 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.079947 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.079968 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.079986 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:40Z","lastTransitionTime":"2026-01-30T21:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.183235 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.183291 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.183307 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.183358 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.183374 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:40Z","lastTransitionTime":"2026-01-30T21:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.285487 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.285540 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.285556 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.285578 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.285594 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:40Z","lastTransitionTime":"2026-01-30T21:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.388366 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.388429 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.388446 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.388471 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.388489 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:40Z","lastTransitionTime":"2026-01-30T21:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.491806 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.491869 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.491889 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.491914 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.491937 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:40Z","lastTransitionTime":"2026-01-30T21:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.594800 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.594851 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.594871 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.594896 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.594915 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:40Z","lastTransitionTime":"2026-01-30T21:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.697829 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.697895 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.697912 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.697940 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.697961 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:40Z","lastTransitionTime":"2026-01-30T21:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.801255 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.801314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.801373 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.801398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.801415 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:40Z","lastTransitionTime":"2026-01-30T21:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.904695 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.904769 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.904786 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.904811 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.904831 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:40Z","lastTransitionTime":"2026-01-30T21:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.964478 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:30:51.798560439 +0000 UTC Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.974884 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:40 crc kubenswrapper[4751]: E0130 21:15:40.975383 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.996774 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.007702 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.007736 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.007751 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.007768 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.007781 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:41Z","lastTransitionTime":"2026-01-30T21:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.110893 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.110948 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.110974 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.111006 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.111031 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:41Z","lastTransitionTime":"2026-01-30T21:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.213508 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.213574 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.213593 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.213615 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.213631 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:41Z","lastTransitionTime":"2026-01-30T21:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.316697 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.316756 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.316775 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.316799 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.316816 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:41Z","lastTransitionTime":"2026-01-30T21:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.419724 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.419782 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.419799 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.419821 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.419836 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:41Z","lastTransitionTime":"2026-01-30T21:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.523591 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.523655 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.523672 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.523696 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.523712 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:41Z","lastTransitionTime":"2026-01-30T21:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.625937 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.625999 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.626015 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.626033 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.626044 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:41Z","lastTransitionTime":"2026-01-30T21:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.729377 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.729436 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.729455 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.729480 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.729497 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:41Z","lastTransitionTime":"2026-01-30T21:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.831825 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.831867 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.831879 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.831899 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.831911 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:41Z","lastTransitionTime":"2026-01-30T21:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.935028 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.935084 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.935099 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.935121 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.935137 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:41Z","lastTransitionTime":"2026-01-30T21:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.965132 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 05:15:57.752718565 +0000 UTC Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.975636 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.975674 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:41 crc kubenswrapper[4751]: E0130 21:15:41.975773 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.975850 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:41 crc kubenswrapper[4751]: E0130 21:15:41.975928 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:41 crc kubenswrapper[4751]: E0130 21:15:41.975993 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.990750 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.006964 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.032855 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6ea3db26de86b678d2306adc7f90c1d03797d9dd14847d766d709276053d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:35Z\\\",\\\"message\\\":\\\"2026-01-30T21:14:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5dc3d9cf-ea3c-4a0d-90cc-2d599ddcdb3c\\\\n2026-01-30T21:14:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5dc3d9cf-ea3c-4a0d-90cc-2d599ddcdb3c to /host/opt/cni/bin/\\\\n2026-01-30T21:14:50Z [verbose] multus-daemon started\\\\n2026-01-30T21:14:50Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:15:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.038175 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.038225 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.038238 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.038254 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.038641 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:42Z","lastTransitionTime":"2026-01-30T21:15:42Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.052404 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"876db467-5de4-469d-926f-72bd7360ff97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751b154b2de8ba6d171eac7b82c77498ed54b38d4c6759e35dacf49c57e3f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27dab33e7af8b89e8b6a3f6d3beff399121ca17e50406b83ec8a553598834ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037db6ab27fdac0f9290b2d34f883cc22ac3c79f2b52a16e6579df97474da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.071921 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.102158 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"ss-canary for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0130 21:15:18.987753 6489 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0130 21:15:18.987765 6489 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0130 21:15:18.987796 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 21:15:18.987648 6489 obj_retry.go:303] Retry object setup: *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.123095 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.140373 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.143936 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.144015 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.144042 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.144072 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.144095 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:42Z","lastTransitionTime":"2026-01-30T21:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.159553 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.179608 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.198752 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.220043 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.236054 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.248446 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.248518 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.248544 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.248570 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.248593 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:42Z","lastTransitionTime":"2026-01-30T21:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.254269 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.292170 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f993e56-5c22-4c90-970f-15faa6ea54b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f36ab607a38bfd32d8bfe64da36280f9b5efaad895c6c26880a00b9dd38ce5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7187f01f2a4bdab72ec724f553bfce1e954fd9793874021f9c28152b7d33914c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdeab12e361345755bc4e07dae7c7355ad83d93a67d27e35596c4b817e2e7699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7bafdd301335a08edb5982410cee5965742f6
b772c88c52ae3630214a4b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ae9a047d02cc4dcd6a27a4561a660059971561db33c72fdaaa10e177e091c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591aa13b2c2298e81c38fc6e0ddbf8f0c5025d86b7c40ec3c5ee4749ce6804a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://591aa13b2c2298e81c38fc6e0ddbf8f0c5025d86b7c40ec3c5ee4749ce6804a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b43a9d38e68aba1f763848cac4817d99a5f5f11f10a3f3da7ae1ec8845e90b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b43a9d38e68aba1f763848cac4817d99a5f5f11f10a3f3da7ae1ec8845e90b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://915d07a3289fc8f3a7221446ffa0562703611899bec4819f77af631ecbeb26c2\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://915d07a3289fc8f3a7221446ffa0562703611899bec4819f77af631ecbeb26c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.319276 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cce
d83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.340505 4751 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.351996 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.352052 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.352069 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.352093 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.352110 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:42Z","lastTransitionTime":"2026-01-30T21:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.367608 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.454385 4751 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.454436 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.454456 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.454481 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.454499 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:42Z","lastTransitionTime":"2026-01-30T21:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.558985 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.559046 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.559065 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.559093 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.559115 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:42Z","lastTransitionTime":"2026-01-30T21:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.662216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.662280 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.662298 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.662366 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.662394 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:42Z","lastTransitionTime":"2026-01-30T21:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.765304 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.765402 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.765420 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.765442 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.765460 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:42Z","lastTransitionTime":"2026-01-30T21:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.868276 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.869273 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.869304 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.869363 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.869390 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:42Z","lastTransitionTime":"2026-01-30T21:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.965917 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 00:12:53.187991201 +0000 UTC
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.974109 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.974149 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.974169 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.974192 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.974211 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:42Z","lastTransitionTime":"2026-01-30T21:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.975194 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w"
Jan 30 21:15:42 crc kubenswrapper[4751]: E0130 21:15:42.975439 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.077818 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.077897 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.077923 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.077956 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.077980 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:43Z","lastTransitionTime":"2026-01-30T21:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.180148 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.180215 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.180233 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.180259 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.180276 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:43Z","lastTransitionTime":"2026-01-30T21:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.283024 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.283079 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.283098 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.283123 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.283141 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:43Z","lastTransitionTime":"2026-01-30T21:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.386057 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.386121 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.386138 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.386162 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.386181 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:43Z","lastTransitionTime":"2026-01-30T21:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.489168 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.489233 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.489252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.489279 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.489299 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:43Z","lastTransitionTime":"2026-01-30T21:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.592496 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.592588 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.592612 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.592642 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.592667 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:43Z","lastTransitionTime":"2026-01-30T21:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.695953 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.696009 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.696026 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.696049 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.696064 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:43Z","lastTransitionTime":"2026-01-30T21:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.798148 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.798209 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.798226 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.798251 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.798268 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:43Z","lastTransitionTime":"2026-01-30T21:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.901186 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.901236 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.901254 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.901278 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.901294 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:43Z","lastTransitionTime":"2026-01-30T21:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.967032 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 16:29:01.757414028 +0000 UTC
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.974778 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.974823 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.974871 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:15:43 crc kubenswrapper[4751]: E0130 21:15:43.974979 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:15:43 crc kubenswrapper[4751]: E0130 21:15:43.975168 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:15:43 crc kubenswrapper[4751]: E0130 21:15:43.975280 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.004500 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.004583 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.004602 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.004624 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.004641 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:44Z","lastTransitionTime":"2026-01-30T21:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.107832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.107879 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.107896 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.107917 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.107964 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:44Z","lastTransitionTime":"2026-01-30T21:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.210123 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.210179 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.210196 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.210217 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.210233 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:44Z","lastTransitionTime":"2026-01-30T21:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.312700 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.312774 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.312792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.312817 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.312835 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:44Z","lastTransitionTime":"2026-01-30T21:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.415580 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.415650 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.415670 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.415696 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.415714 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:44Z","lastTransitionTime":"2026-01-30T21:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.519013 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.519105 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.519130 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.519157 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.519177 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:44Z","lastTransitionTime":"2026-01-30T21:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.622529 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.622600 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.622619 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.622645 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.622661 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:44Z","lastTransitionTime":"2026-01-30T21:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.726183 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.726284 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.726305 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.726435 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.726457 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:44Z","lastTransitionTime":"2026-01-30T21:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.829272 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.829375 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.829394 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.829421 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.829439 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:44Z","lastTransitionTime":"2026-01-30T21:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.932480 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.932540 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.932575 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.932612 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.932696 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:44Z","lastTransitionTime":"2026-01-30T21:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.968062 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 19:00:36.149907326 +0000 UTC
Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.975451 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w"
Jan 30 21:15:44 crc kubenswrapper[4751]: E0130 21:15:44.975674 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.035818 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.035890 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.035912 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.035942 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.035965 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:45Z","lastTransitionTime":"2026-01-30T21:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.138537 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.138633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.138654 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.138676 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.138693 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:45Z","lastTransitionTime":"2026-01-30T21:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.242547 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.242599 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.242617 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.242641 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.242660 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:45Z","lastTransitionTime":"2026-01-30T21:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.345363 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.345436 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.345459 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.345489 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.345511 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:45Z","lastTransitionTime":"2026-01-30T21:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.448689 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.448737 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.448766 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.448845 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.448857 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:45Z","lastTransitionTime":"2026-01-30T21:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.553807 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.553890 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.553909 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.553937 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.553967 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:45Z","lastTransitionTime":"2026-01-30T21:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.658535 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.658610 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.658635 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.658671 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.658695 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:45Z","lastTransitionTime":"2026-01-30T21:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.761157 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.761206 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.761222 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.761249 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.761265 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:45Z","lastTransitionTime":"2026-01-30T21:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.790038 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.790185 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:49.790155807 +0000 UTC m=+148.535978496 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.790242 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.790312 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.790416 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.790507 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.790535 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.790558 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:16:49.790543617 +0000 UTC m=+148.536366306 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.790653 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:16:49.79063142 +0000 UTC m=+148.536454109 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.790778 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.790840 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.790865 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.790985 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:16:49.790948648 +0000 UTC m=+148.536771377 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.864478 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.864533 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.864550 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.864570 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.864586 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:45Z","lastTransitionTime":"2026-01-30T21:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.891600 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.891834 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.891877 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.891896 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.891983 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:16:49.891958977 +0000 UTC m=+148.637781666 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.967264 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.967357 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.967376 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.967403 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.967539 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:45Z","lastTransitionTime":"2026-01-30T21:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.992588 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 21:21:56.706503138 +0000 UTC
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.992900 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.992969 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.993036 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.993105 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.993320 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.993498 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.071489 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.071546 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.071577 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.071601 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.071618 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:46Z","lastTransitionTime":"2026-01-30T21:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.174287 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.174379 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.174398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.174426 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.174445 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:46Z","lastTransitionTime":"2026-01-30T21:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.277389 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.277458 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.277477 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.277505 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.277527 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:46Z","lastTransitionTime":"2026-01-30T21:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.380786 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.380862 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.380879 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.380904 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.380924 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:46Z","lastTransitionTime":"2026-01-30T21:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.484020 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.484072 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.484092 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.484117 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.484135 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:46Z","lastTransitionTime":"2026-01-30T21:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.586742 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.586832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.586850 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.586875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.586894 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:46Z","lastTransitionTime":"2026-01-30T21:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.689788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.689853 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.689869 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.689895 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.689912 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:46Z","lastTransitionTime":"2026-01-30T21:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.792681 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.792771 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.792797 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.792831 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.792857 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:46Z","lastTransitionTime":"2026-01-30T21:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.897570 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.897644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.897665 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.897692 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.897713 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:46Z","lastTransitionTime":"2026-01-30T21:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.975548 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w"
Jan 30 21:15:46 crc kubenswrapper[4751]: E0130 21:15:46.975948 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358"
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.991372 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.993460 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:29:01.204800912 +0000 UTC
Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.000477 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.000520 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.000532 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.000548 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.000559 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:47Z","lastTransitionTime":"2026-01-30T21:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.102729 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.102761 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.102772 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.102785 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.102795 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:47Z","lastTransitionTime":"2026-01-30T21:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.206294 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.206398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.206419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.206443 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.206461 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:47Z","lastTransitionTime":"2026-01-30T21:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.309730 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.309788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.309808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.309832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.309849 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:47Z","lastTransitionTime":"2026-01-30T21:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.413111 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.413195 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.413219 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.413255 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.413280 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:47Z","lastTransitionTime":"2026-01-30T21:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.515937 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.516004 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.516022 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.516045 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.516062 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:47Z","lastTransitionTime":"2026-01-30T21:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.620817 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.620862 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.620871 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.620886 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.620900 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:47Z","lastTransitionTime":"2026-01-30T21:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.724300 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.724396 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.724415 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.724439 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.724458 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:47Z","lastTransitionTime":"2026-01-30T21:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.826932 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.826997 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.827015 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.827040 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.827058 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:47Z","lastTransitionTime":"2026-01-30T21:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.930149 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.930219 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.930242 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.930294 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.930313 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:47Z","lastTransitionTime":"2026-01-30T21:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.974907 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:47 crc kubenswrapper[4751]: E0130 21:15:47.975122 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.975217 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:47 crc kubenswrapper[4751]: E0130 21:15:47.975437 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.975474 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:47 crc kubenswrapper[4751]: E0130 21:15:47.975601 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.994015 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 08:03:52.667128567 +0000 UTC Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.033395 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.033549 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.033589 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.033619 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.033640 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:48Z","lastTransitionTime":"2026-01-30T21:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.138643 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.138789 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.138809 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.138876 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.138899 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:48Z","lastTransitionTime":"2026-01-30T21:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.242858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.242917 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.242934 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.242959 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.242976 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:48Z","lastTransitionTime":"2026-01-30T21:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.346271 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.346378 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.346402 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.346429 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.346449 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:48Z","lastTransitionTime":"2026-01-30T21:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.449572 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.449644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.449661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.449688 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.449705 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:48Z","lastTransitionTime":"2026-01-30T21:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.553431 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.553481 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.553499 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.553520 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.553536 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:48Z","lastTransitionTime":"2026-01-30T21:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.656939 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.657010 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.657028 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.657056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.657076 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:48Z","lastTransitionTime":"2026-01-30T21:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.760383 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.760443 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.760460 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.760483 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.760501 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:48Z","lastTransitionTime":"2026-01-30T21:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.863479 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.863553 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.863578 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.863606 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.863627 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:48Z","lastTransitionTime":"2026-01-30T21:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.967170 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.967247 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.967269 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.967297 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.967317 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:48Z","lastTransitionTime":"2026-01-30T21:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.975293 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:48 crc kubenswrapper[4751]: E0130 21:15:48.975519 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.994431 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 23:01:01.515163173 +0000 UTC Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.070582 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.070644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.070660 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.070684 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.070702 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.174362 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.174429 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.174446 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.174471 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.174489 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.277481 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.277568 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.277585 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.277611 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.277629 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.381224 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.381291 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.381311 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.381364 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.381382 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.484213 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.484266 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.484279 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.484299 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.484313 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.587243 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.587373 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.587403 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.587435 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.587458 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.622734 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.622799 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.622819 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.622843 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.622862 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: E0130 21:15:49.645595 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.651210 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.651274 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.651292 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.651318 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.651362 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: E0130 21:15:49.671170 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.676013 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.676077 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.676096 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.676122 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.676140 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: E0130 21:15:49.696939 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.701674 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.701738 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.701760 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.701785 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.701806 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: E0130 21:15:49.723704 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.728658 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.728703 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.728720 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.728743 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.728761 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: E0130 21:15:49.748833 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:49 crc kubenswrapper[4751]: E0130 21:15:49.749045 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.751150 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.751230 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.751254 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.751284 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.751305 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.854944 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.855008 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.855025 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.855051 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.855068 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.958306 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.958398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.958417 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.958441 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.958460 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.974971 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:49 crc kubenswrapper[4751]: E0130 21:15:49.975122 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.975139 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.975191 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:49 crc kubenswrapper[4751]: E0130 21:15:49.975811 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:49 crc kubenswrapper[4751]: E0130 21:15:49.975929 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.976308 4751 scope.go:117] "RemoveContainer" containerID="520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.995655 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 20:55:44.983669792 +0000 UTC Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.061632 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.061690 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.061707 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.061733 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.061751 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:50Z","lastTransitionTime":"2026-01-30T21:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.165271 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.165848 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.165868 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.165894 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.165912 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:50Z","lastTransitionTime":"2026-01-30T21:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.268489 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.268553 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.268570 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.268597 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.268614 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:50Z","lastTransitionTime":"2026-01-30T21:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.371699 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.371773 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.371796 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.371836 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.371858 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:50Z","lastTransitionTime":"2026-01-30T21:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.475016 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.475063 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.475074 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.475091 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.475104 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:50Z","lastTransitionTime":"2026-01-30T21:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.507523 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/2.log" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.509426 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"959e2d34bf4d2470d1737891bbe3d8704e887259d95ea026e0467f531587bd29"} Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.510463 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.522516 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.535811 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.545786 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.566403 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f993e56-5c22-4c90-970f-15faa6ea54b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f36ab607a38bfd32d8bfe64da36280f9b5efaad895c6c26880a00b9dd38ce5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7187f01f2a4bdab72ec724f553bfce1e954fd9793874021f9c28152b7d33914c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdeab12e361345755bc4e07dae7c7355ad83d93a67d27e35596c4b817e2e7699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7bafdd301335a08edb5982410cee5965742f6
b772c88c52ae3630214a4b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ae9a047d02cc4dcd6a27a4561a660059971561db33c72fdaaa10e177e091c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591aa13b2c2298e81c38fc6e0ddbf8f0c5025d86b7c40ec3c5ee4749ce6804a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://591aa13b2c2298e81c38fc6e0ddbf8f0c5025d86b7c40ec3c5ee4749ce6804a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b43a9d38e68aba1f763848cac4817d99a5f5f11f10a3f3da7ae1ec8845e90b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b43a9d38e68aba1f763848cac4817d99a5f5f11f10a3f3da7ae1ec8845e90b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://915d07a3289fc8f3a7221446ffa0562703611899bec4819f77af631ecbeb26c2\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://915d07a3289fc8f3a7221446ffa0562703611899bec4819f77af631ecbeb26c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.577273 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.577305 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.577317 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.577350 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.577361 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:50Z","lastTransitionTime":"2026-01-30T21:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.581202 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.593725 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.607316 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6ea3db26de86b678d2306adc7f90c1d03797d9dd14847d766d709276053d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:35Z\\\",\\\"message\\\":\\\"2026-01-30T21:14:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5dc3d9cf-ea3c-4a0d-90cc-2d599ddcdb3c\\\\n2026-01-30T21:14:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5dc3d9cf-ea3c-4a0d-90cc-2d599ddcdb3c to /host/opt/cni/bin/\\\\n2026-01-30T21:14:50Z [verbose] multus-daemon started\\\\n2026-01-30T21:14:50Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:15:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.617695 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8627de81-5598-4e77-b895-c17fe64fde13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe6b37689f97464405ccee9a22eff435e66be2c6103b5187255056bf0febaec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657bffa589cf69814f91728996ae779354f7ad9f62606bbba6fcc4107a06cfb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://657bffa589cf69814f91728996ae779354f7ad9f62606bbba6fcc4107a06cfb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.631919 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.649567 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"876db467-5de4-469d-926f-72bd7360ff97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751b154b2de8ba6d171eac7b82c77498ed54b38d4c6759e35dacf49c57e3f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27dab33e7af8b89e8b6a3f6d3beff399121ca17e50406b83ec8a553598834ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037db6ab27fdac0f9290b2d34f883cc22ac3c79f2b52a16e6579df97474da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.665138 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.680156 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.680197 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.680214 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.680237 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.680253 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:50Z","lastTransitionTime":"2026-01-30T21:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.689289 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.707677 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.727168 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://959e2d34bf4d2470d1737891bbe3d8704e887259d95ea026e0467f531587bd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"ss-canary for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0130 21:15:18.987753 6489 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0130 21:15:18.987765 6489 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0130 21:15:18.987796 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 21:15:18.987648 6489 obj_retry.go:303] Retry object setup: 
*\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.738104 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.748545 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.758956 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 
21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.772389 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.782685 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.782727 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.782743 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.782760 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.782772 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:50Z","lastTransitionTime":"2026-01-30T21:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.785859 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.886305 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.886415 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.886439 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.886475 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.886501 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:50Z","lastTransitionTime":"2026-01-30T21:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.975052 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:50 crc kubenswrapper[4751]: E0130 21:15:50.975314 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.989314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.989428 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.989451 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.989482 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.989505 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:50Z","lastTransitionTime":"2026-01-30T21:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.996474 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 11:45:04.616220289 +0000 UTC Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.092707 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.092760 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.092776 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.092800 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.092818 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:51Z","lastTransitionTime":"2026-01-30T21:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.195732 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.195789 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.195806 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.195835 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.195853 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:51Z","lastTransitionTime":"2026-01-30T21:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.299534 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.299611 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.299633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.299657 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.299675 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:51Z","lastTransitionTime":"2026-01-30T21:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.402318 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.402413 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.402431 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.402455 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.402473 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:51Z","lastTransitionTime":"2026-01-30T21:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.505569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.505645 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.505668 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.505697 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.505714 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:51Z","lastTransitionTime":"2026-01-30T21:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.516046 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/3.log" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.517092 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/2.log" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.520974 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="959e2d34bf4d2470d1737891bbe3d8704e887259d95ea026e0467f531587bd29" exitCode=1 Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.521026 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"959e2d34bf4d2470d1737891bbe3d8704e887259d95ea026e0467f531587bd29"} Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.521099 4751 scope.go:117] "RemoveContainer" containerID="520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.522015 4751 scope.go:117] "RemoveContainer" containerID="959e2d34bf4d2470d1737891bbe3d8704e887259d95ea026e0467f531587bd29" Jan 30 21:15:51 crc kubenswrapper[4751]: E0130 21:15:51.522295 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.597946 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=34.59791928 podStartE2EDuration="34.59791928s" podCreationTimestamp="2026-01-30 21:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.579416062 +0000 UTC 
m=+90.325238751" watchObservedRunningTime="2026-01-30 21:15:51.59791928 +0000 UTC m=+90.343741969" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.609261 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.609370 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.609392 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.609419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.609436 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:51Z","lastTransitionTime":"2026-01-30T21:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.639589 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xdclq" podStartSLOduration=63.639555211 podStartE2EDuration="1m3.639555211s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.637934236 +0000 UTC m=+90.383756905" watchObservedRunningTime="2026-01-30 21:15:51.639555211 +0000 UTC m=+90.385377900" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.699793 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podStartSLOduration=63.699766061 podStartE2EDuration="1m3.699766061s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.699515934 +0000 UTC m=+90.445338593" watchObservedRunningTime="2026-01-30 21:15:51.699766061 +0000 UTC m=+90.445588740" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.712777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.712818 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.712829 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.712848 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.712860 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:51Z","lastTransitionTime":"2026-01-30T21:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.731123 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-h48zj" podStartSLOduration=63.731099159 podStartE2EDuration="1m3.731099159s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.71577832 +0000 UTC m=+90.461600989" watchObservedRunningTime="2026-01-30 21:15:51.731099159 +0000 UTC m=+90.476921838" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.755526 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" podStartSLOduration=62.755498478 podStartE2EDuration="1m2.755498478s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.730865443 +0000 UTC m=+90.476688122" watchObservedRunningTime="2026-01-30 21:15:51.755498478 +0000 UTC m=+90.501321167" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.756593 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=68.756579368 podStartE2EDuration="1m8.756579368s" podCreationTimestamp="2026-01-30 21:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.755289513 +0000 UTC m=+90.501112172" watchObservedRunningTime="2026-01-30 21:15:51.756579368 +0000 UTC m=+90.502402067" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.798860 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=69.798826205 podStartE2EDuration="1m9.798826205s" podCreationTimestamp="2026-01-30 21:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.775939758 +0000 UTC m=+90.521762447" watchObservedRunningTime="2026-01-30 21:15:51.798826205 +0000 UTC m=+90.544648894" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.815316 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.815390 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.815407 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.815429 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.815444 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:51Z","lastTransitionTime":"2026-01-30T21:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.835392 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" podStartSLOduration=63.835372247 podStartE2EDuration="1m3.835372247s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.835349866 +0000 UTC m=+90.581172515" watchObservedRunningTime="2026-01-30 21:15:51.835372247 +0000 UTC m=+90.581194916" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.879000 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=11.878971882 podStartE2EDuration="11.878971882s" podCreationTimestamp="2026-01-30 21:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.878202302 +0000 UTC m=+90.624024961" watchObservedRunningTime="2026-01-30 21:15:51.878971882 +0000 UTC m=+90.624794561" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.917406 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.917449 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.917490 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.917512 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.917529 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:51Z","lastTransitionTime":"2026-01-30T21:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.929799 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-5sgk2" podStartSLOduration=63.929775415 podStartE2EDuration="1m3.929775415s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.928414418 +0000 UTC m=+90.674237077" watchObservedRunningTime="2026-01-30 21:15:51.929775415 +0000 UTC m=+90.675598064" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.943756 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=5.943734787 podStartE2EDuration="5.943734787s" podCreationTimestamp="2026-01-30 21:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.943421649 +0000 UTC m=+90.689244328" watchObservedRunningTime="2026-01-30 21:15:51.943734787 +0000 UTC m=+90.689557446" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.974964 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.974993 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.975064 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:51 crc kubenswrapper[4751]: E0130 21:15:51.976246 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:51 crc kubenswrapper[4751]: E0130 21:15:51.976497 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:51 crc kubenswrapper[4751]: E0130 21:15:51.976412 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.996826 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 17:09:07.556820723 +0000 UTC Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.020740 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.020790 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.020802 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.020820 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.020833 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:52Z","lastTransitionTime":"2026-01-30T21:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.123923 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.124270 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.124293 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.124353 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.124383 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:52Z","lastTransitionTime":"2026-01-30T21:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.227187 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.227399 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.227429 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.227463 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.227489 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:52Z","lastTransitionTime":"2026-01-30T21:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.329920 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.329975 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.329998 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.330025 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.330047 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:52Z","lastTransitionTime":"2026-01-30T21:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.433394 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.433465 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.433474 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.433512 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.433525 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:52Z","lastTransitionTime":"2026-01-30T21:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.526698 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/3.log"
Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.530535 4751 scope.go:117] "RemoveContainer" containerID="959e2d34bf4d2470d1737891bbe3d8704e887259d95ea026e0467f531587bd29"
Jan 30 21:15:52 crc kubenswrapper[4751]: E0130 21:15:52.530807 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"
Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.535512 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.535615 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.535639 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.535669 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.535692 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:52Z","lastTransitionTime":"2026-01-30T21:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.638223 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.638269 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.638285 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.638307 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.638350 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:52Z","lastTransitionTime":"2026-01-30T21:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.741602 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.741656 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.741674 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.741696 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.741713 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:52Z","lastTransitionTime":"2026-01-30T21:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.844678 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.844725 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.844744 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.844766 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.844782 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:52Z","lastTransitionTime":"2026-01-30T21:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.947136 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.947203 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.947226 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.947254 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.947277 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:52Z","lastTransitionTime":"2026-01-30T21:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.974828 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:52 crc kubenswrapper[4751]: E0130 21:15:52.974986 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.997231 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 07:25:35.011761069 +0000 UTC Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.050470 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.050543 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.050567 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.050596 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.050629 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:53Z","lastTransitionTime":"2026-01-30T21:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.164832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.164912 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.164935 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.164964 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.164985 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:53Z","lastTransitionTime":"2026-01-30T21:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.267404 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.267466 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.267483 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.267510 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.267549 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:53Z","lastTransitionTime":"2026-01-30T21:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.369788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.369904 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.369932 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.369962 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.369986 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:53Z","lastTransitionTime":"2026-01-30T21:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.477406 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.477484 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.477505 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.477532 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.477558 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:53Z","lastTransitionTime":"2026-01-30T21:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.581248 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.581670 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.581873 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.582065 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.582254 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:53Z","lastTransitionTime":"2026-01-30T21:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.685252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.685296 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.685313 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.685362 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.685379 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:53Z","lastTransitionTime":"2026-01-30T21:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.788478 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.788534 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.788556 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.788581 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.788600 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:53Z","lastTransitionTime":"2026-01-30T21:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.891960 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.892029 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.892047 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.892071 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.892089 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:53Z","lastTransitionTime":"2026-01-30T21:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.975104 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.975221 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.975307 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:53 crc kubenswrapper[4751]: E0130 21:15:53.975775 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:53 crc kubenswrapper[4751]: E0130 21:15:53.975917 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:53 crc kubenswrapper[4751]: E0130 21:15:53.976052 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.996099 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.996153 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.996170 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.996192 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.996209 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:53Z","lastTransitionTime":"2026-01-30T21:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.998299 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 23:53:39.269953392 +0000 UTC Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.098897 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.098976 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.099000 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.099030 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.099049 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:54Z","lastTransitionTime":"2026-01-30T21:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.202380 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.202443 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.202461 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.202489 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.202507 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:54Z","lastTransitionTime":"2026-01-30T21:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.305992 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.306063 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.306086 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.306116 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.306136 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:54Z","lastTransitionTime":"2026-01-30T21:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.409762 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.409832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.409855 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.409882 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.409899 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:54Z","lastTransitionTime":"2026-01-30T21:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.513366 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.513422 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.513434 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.513459 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.513475 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:54Z","lastTransitionTime":"2026-01-30T21:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.616487 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.616559 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.616583 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.616612 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.616630 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:54Z","lastTransitionTime":"2026-01-30T21:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.719701 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.719799 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.719820 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.719845 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.719860 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:54Z","lastTransitionTime":"2026-01-30T21:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.822981 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.823053 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.823072 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.823100 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.823118 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:54Z","lastTransitionTime":"2026-01-30T21:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.925701 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.925750 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.925766 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.925788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.925805 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:54Z","lastTransitionTime":"2026-01-30T21:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.975650 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:54 crc kubenswrapper[4751]: E0130 21:15:54.975832 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
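The 21:15:52.530807 entry earlier ("back-off 40s restarting failed container=ovnkube-controller") explains why the CNI config never appears: the network provider itself is crash-looping, and the kubelet will not retry it until the back-off expires. Assuming the upstream kubelet defaults (10s initial delay, doubled per failed restart, capped at 5 minutes), a 40s back-off means ovnkube-controller has already failed roughly three times:

# Sketch of the kubelet's CrashLoopBackOff delay sequence under the
# assumed upstream defaults: 10s initial back-off, doubling per failed
# restart, capped at 300s. Illustrative only, not kubelet source.
INITIAL_S, CAP_S = 10, 300

def backoff_delays(restarts):
    delay = INITIAL_S
    for _ in range(restarts):
        yield delay
        delay = min(delay * 2, CAP_S)

print(list(backoff_delays(7)))  # [10, 20, 40, 80, 160, 300, 300]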
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.000239 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 16:39:36.922935349 +0000 UTC Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.028569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.028619 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.028642 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.028671 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.028693 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:55Z","lastTransitionTime":"2026-01-30T21:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.131068 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.131128 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.131151 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.131179 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.131200 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:55Z","lastTransitionTime":"2026-01-30T21:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.234006 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.234077 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.234100 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.234135 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.234159 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:55Z","lastTransitionTime":"2026-01-30T21:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.336889 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.336977 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.336995 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.337021 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.337040 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:55Z","lastTransitionTime":"2026-01-30T21:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.440272 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.440382 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.440408 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.440546 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.440575 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:55Z","lastTransitionTime":"2026-01-30T21:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.543109 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.543171 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.543188 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.543211 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.543228 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:55Z","lastTransitionTime":"2026-01-30T21:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.645755 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.645817 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.645835 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.645858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.645879 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:55Z","lastTransitionTime":"2026-01-30T21:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.748502 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.748550 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.748561 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.748581 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.748593 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:55Z","lastTransitionTime":"2026-01-30T21:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.852411 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.852493 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.852511 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.852536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.852553 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:55Z","lastTransitionTime":"2026-01-30T21:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.954790 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.954834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.954845 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.954862 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.954874 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:55Z","lastTransitionTime":"2026-01-30T21:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.975498 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:55 crc kubenswrapper[4751]: E0130 21:15:55.975610 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.975628 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.975667 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:55 crc kubenswrapper[4751]: E0130 21:15:55.975761 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:55 crc kubenswrapper[4751]: E0130 21:15:55.976032 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.000879 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:39:57.563036608 +0000 UTC Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.058241 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.058437 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.058468 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.058500 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.058523 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:56Z","lastTransitionTime":"2026-01-30T21:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.162314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.162412 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.162434 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.162462 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.162489 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:56Z","lastTransitionTime":"2026-01-30T21:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.267073 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.267139 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.267159 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.267188 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.267215 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:56Z","lastTransitionTime":"2026-01-30T21:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.370999 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.371041 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.371053 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.371070 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.371082 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:56Z","lastTransitionTime":"2026-01-30T21:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.474565 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.474622 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.474640 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.474665 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.474682 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:56Z","lastTransitionTime":"2026-01-30T21:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.578581 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.578642 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.578661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.578686 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.578705 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:56Z","lastTransitionTime":"2026-01-30T21:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.681229 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.681291 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.681315 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.681385 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.681408 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:56Z","lastTransitionTime":"2026-01-30T21:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.784193 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.784253 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.784269 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.784295 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.784315 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:56Z","lastTransitionTime":"2026-01-30T21:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.888103 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.888177 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.888199 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.888229 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.888252 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:56Z","lastTransitionTime":"2026-01-30T21:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.975600 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:56 crc kubenswrapper[4751]: E0130 21:15:56.975752 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.990573 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.990650 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.990675 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.990707 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.990732 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:56Z","lastTransitionTime":"2026-01-30T21:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.001875 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 21:30:30.305306454 +0000 UTC Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.094172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.094250 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.094276 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.094306 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.094366 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:57Z","lastTransitionTime":"2026-01-30T21:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.197554 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.197599 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.197609 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.197623 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.197633 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:57Z","lastTransitionTime":"2026-01-30T21:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.300534 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.300592 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.300614 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.300644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.300668 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:57Z","lastTransitionTime":"2026-01-30T21:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.404389 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.404473 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.404494 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.404521 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.404541 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:57Z","lastTransitionTime":"2026-01-30T21:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.507037 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.507105 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.507129 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.507160 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.507183 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:57Z","lastTransitionTime":"2026-01-30T21:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.610057 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.610140 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.610175 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.610207 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.610229 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:57Z","lastTransitionTime":"2026-01-30T21:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.713171 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.713238 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.713257 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.713280 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.713297 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:57Z","lastTransitionTime":"2026-01-30T21:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.817206 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.817275 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.817293 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.817316 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.817367 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:57Z","lastTransitionTime":"2026-01-30T21:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.920657 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.920743 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.920777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.920805 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.920822 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:57Z","lastTransitionTime":"2026-01-30T21:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.976043 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.976151 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:57 crc kubenswrapper[4751]: E0130 21:15:57.976214 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.976063 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:57 crc kubenswrapper[4751]: E0130 21:15:57.976463 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:57 crc kubenswrapper[4751]: E0130 21:15:57.976704 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.002623 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 17:51:37.662484531 +0000 UTC Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.022904 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.022979 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.023001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.023033 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.023057 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:58Z","lastTransitionTime":"2026-01-30T21:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.125777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.125857 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.125875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.125904 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.125922 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:58Z","lastTransitionTime":"2026-01-30T21:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.228561 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.228626 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.228654 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.228687 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.228712 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:58Z","lastTransitionTime":"2026-01-30T21:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.332013 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.332070 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.332086 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.332108 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.332122 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:58Z","lastTransitionTime":"2026-01-30T21:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.434877 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.434948 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.434965 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.434987 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.435005 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:58Z","lastTransitionTime":"2026-01-30T21:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.537435 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.537490 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.537506 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.537531 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.537548 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:58Z","lastTransitionTime":"2026-01-30T21:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.641049 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.641138 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.641155 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.641179 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.641197 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:58Z","lastTransitionTime":"2026-01-30T21:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.744432 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.744517 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.744547 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.744579 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.744603 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:58Z","lastTransitionTime":"2026-01-30T21:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.847945 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.848015 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.848032 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.848055 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.848072 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:58Z","lastTransitionTime":"2026-01-30T21:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.950932 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.951000 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.951023 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.951053 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.951077 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:58Z","lastTransitionTime":"2026-01-30T21:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.974724 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:58 crc kubenswrapper[4751]: E0130 21:15:58.974912 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.002795 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 16:25:53.782304828 +0000 UTC Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.053567 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.053623 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.053641 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.053662 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.053678 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:59Z","lastTransitionTime":"2026-01-30T21:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.157179 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.157250 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.157269 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.157297 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.157315 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:59Z","lastTransitionTime":"2026-01-30T21:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.267469 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.267556 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.267577 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.267603 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.267620 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:59Z","lastTransitionTime":"2026-01-30T21:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.371230 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.371275 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.371291 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.371314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.371363 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:59Z","lastTransitionTime":"2026-01-30T21:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.474285 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.474433 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.474462 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.474502 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.474524 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:59Z","lastTransitionTime":"2026-01-30T21:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.577782 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.577876 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.577895 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.577919 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.577936 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:59Z","lastTransitionTime":"2026-01-30T21:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.681215 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.681286 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.681307 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.681372 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.681391 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:59Z","lastTransitionTime":"2026-01-30T21:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.784923 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.785017 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.785037 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.785065 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.785083 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:59Z","lastTransitionTime":"2026-01-30T21:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.871166 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.871236 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.871259 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.871288 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.871310 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:59Z","lastTransitionTime":"2026-01-30T21:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.934777 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss"] Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.935721 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.938661 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.938827 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.939035 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.939172 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.942684 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/03cfe98a-5efe-4a69-856e-1bcf960c268a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.942743 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03cfe98a-5efe-4a69-856e-1bcf960c268a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.942778 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/03cfe98a-5efe-4a69-856e-1bcf960c268a-etc-ssl-certs\") pod 
\"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.942816 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/03cfe98a-5efe-4a69-856e-1bcf960c268a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.942861 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03cfe98a-5efe-4a69-856e-1bcf960c268a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.975652 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:59 crc kubenswrapper[4751]: E0130 21:15:59.975901 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.975656 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:59 crc kubenswrapper[4751]: E0130 21:15:59.976018 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.975652 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:59 crc kubenswrapper[4751]: E0130 21:15:59.976103 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.003384 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 09:16:08.760725411 +0000 UTC Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.003487 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.015002 4751 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.044559 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/03cfe98a-5efe-4a69-856e-1bcf960c268a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.044623 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03cfe98a-5efe-4a69-856e-1bcf960c268a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.044657 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/03cfe98a-5efe-4a69-856e-1bcf960c268a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.044698 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/03cfe98a-5efe-4a69-856e-1bcf960c268a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.044749 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03cfe98a-5efe-4a69-856e-1bcf960c268a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.044865 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/03cfe98a-5efe-4a69-856e-1bcf960c268a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.044898 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/03cfe98a-5efe-4a69-856e-1bcf960c268a-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.046083 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/03cfe98a-5efe-4a69-856e-1bcf960c268a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.066266 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03cfe98a-5efe-4a69-856e-1bcf960c268a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.084532 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03cfe98a-5efe-4a69-856e-1bcf960c268a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.259492 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.563543 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" event={"ID":"03cfe98a-5efe-4a69-856e-1bcf960c268a","Type":"ContainerStarted","Data":"73f5adb8104d10ae116d6d8493bce49ce348c1420aff1a924f15e589046e84b9"} Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.564037 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" event={"ID":"03cfe98a-5efe-4a69-856e-1bcf960c268a","Type":"ContainerStarted","Data":"4218678d613e9e169f18ff75017b1e959a1d8bbfd899f8fd6908a20155d85e25"} Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.974994 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:00 crc kubenswrapper[4751]: E0130 21:16:00.975167 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:01 crc kubenswrapper[4751]: I0130 21:16:01.974798 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:01 crc kubenswrapper[4751]: I0130 21:16:01.974848 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:01 crc kubenswrapper[4751]: E0130 21:16:01.978112 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:01 crc kubenswrapper[4751]: I0130 21:16:01.978172 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:01 crc kubenswrapper[4751]: E0130 21:16:01.978360 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:01 crc kubenswrapper[4751]: E0130 21:16:01.978612 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:02 crc kubenswrapper[4751]: I0130 21:16:02.975647 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:02 crc kubenswrapper[4751]: E0130 21:16:02.975854 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:03 crc kubenswrapper[4751]: I0130 21:16:03.975051 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:03 crc kubenswrapper[4751]: I0130 21:16:03.975231 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:03 crc kubenswrapper[4751]: I0130 21:16:03.975281 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:03 crc kubenswrapper[4751]: E0130 21:16:03.975625 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:03 crc kubenswrapper[4751]: E0130 21:16:03.975743 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:03 crc kubenswrapper[4751]: E0130 21:16:03.975905 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:04 crc kubenswrapper[4751]: I0130 21:16:04.976044 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:04 crc kubenswrapper[4751]: E0130 21:16:04.976758 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:04 crc kubenswrapper[4751]: I0130 21:16:04.976995 4751 scope.go:117] "RemoveContainer" containerID="959e2d34bf4d2470d1737891bbe3d8704e887259d95ea026e0467f531587bd29" Jan 30 21:16:04 crc kubenswrapper[4751]: E0130 21:16:04.977227 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" Jan 30 21:16:05 crc kubenswrapper[4751]: I0130 21:16:05.975685 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:05 crc kubenswrapper[4751]: I0130 21:16:05.975705 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:05 crc kubenswrapper[4751]: I0130 21:16:05.975759 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:05 crc kubenswrapper[4751]: E0130 21:16:05.977213 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:05 crc kubenswrapper[4751]: E0130 21:16:05.977269 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:05 crc kubenswrapper[4751]: E0130 21:16:05.977320 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:06 crc kubenswrapper[4751]: I0130 21:16:06.974971 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:06 crc kubenswrapper[4751]: E0130 21:16:06.975472 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:07 crc kubenswrapper[4751]: I0130 21:16:07.027728 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:07 crc kubenswrapper[4751]: E0130 21:16:07.027967 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:16:07 crc kubenswrapper[4751]: E0130 21:16:07.028042 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs podName:3c30a687-0b58-4a63-b9e3-3a3624676358 nodeName:}" failed. No retries permitted until 2026-01-30 21:17:11.028017348 +0000 UTC m=+169.773840027 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs") pod "network-metrics-daemon-c477w" (UID: "3c30a687-0b58-4a63-b9e3-3a3624676358") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:16:07 crc kubenswrapper[4751]: I0130 21:16:07.974876 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:07 crc kubenswrapper[4751]: I0130 21:16:07.974961 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:07 crc kubenswrapper[4751]: I0130 21:16:07.975092 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:07 crc kubenswrapper[4751]: E0130 21:16:07.975098 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:07 crc kubenswrapper[4751]: E0130 21:16:07.975297 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:07 crc kubenswrapper[4751]: E0130 21:16:07.975611 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:08 crc kubenswrapper[4751]: I0130 21:16:08.975362 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:08 crc kubenswrapper[4751]: E0130 21:16:08.975654 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:09 crc kubenswrapper[4751]: I0130 21:16:09.975214 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:09 crc kubenswrapper[4751]: I0130 21:16:09.975265 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:09 crc kubenswrapper[4751]: E0130 21:16:09.975433 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:09 crc kubenswrapper[4751]: I0130 21:16:09.975510 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:09 crc kubenswrapper[4751]: E0130 21:16:09.975625 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:09 crc kubenswrapper[4751]: E0130 21:16:09.975803 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:10 crc kubenswrapper[4751]: I0130 21:16:10.983808 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:10 crc kubenswrapper[4751]: E0130 21:16:10.983999 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:11 crc kubenswrapper[4751]: I0130 21:16:11.975502 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:11 crc kubenswrapper[4751]: E0130 21:16:11.975661 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:11 crc kubenswrapper[4751]: I0130 21:16:11.976611 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:11 crc kubenswrapper[4751]: I0130 21:16:11.976702 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:11 crc kubenswrapper[4751]: E0130 21:16:11.977245 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:11 crc kubenswrapper[4751]: E0130 21:16:11.977436 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:12 crc kubenswrapper[4751]: I0130 21:16:12.975700 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:12 crc kubenswrapper[4751]: E0130 21:16:12.975937 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:13 crc kubenswrapper[4751]: I0130 21:16:13.975258 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:13 crc kubenswrapper[4751]: I0130 21:16:13.975273 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:13 crc kubenswrapper[4751]: I0130 21:16:13.975392 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:13 crc kubenswrapper[4751]: E0130 21:16:13.975570 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:13 crc kubenswrapper[4751]: E0130 21:16:13.975695 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:13 crc kubenswrapper[4751]: E0130 21:16:13.975778 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:14 crc kubenswrapper[4751]: I0130 21:16:14.975545 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:14 crc kubenswrapper[4751]: E0130 21:16:14.975712 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:15 crc kubenswrapper[4751]: I0130 21:16:15.975284 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:15 crc kubenswrapper[4751]: I0130 21:16:15.975560 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:15 crc kubenswrapper[4751]: I0130 21:16:15.975853 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:15 crc kubenswrapper[4751]: E0130 21:16:15.975830 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:15 crc kubenswrapper[4751]: E0130 21:16:15.976047 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:15 crc kubenswrapper[4751]: E0130 21:16:15.976245 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:16 crc kubenswrapper[4751]: I0130 21:16:16.975314 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:16 crc kubenswrapper[4751]: E0130 21:16:16.975516 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:16 crc kubenswrapper[4751]: I0130 21:16:16.976379 4751 scope.go:117] "RemoveContainer" containerID="959e2d34bf4d2470d1737891bbe3d8704e887259d95ea026e0467f531587bd29" Jan 30 21:16:16 crc kubenswrapper[4751]: E0130 21:16:16.976620 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" Jan 30 21:16:17 crc kubenswrapper[4751]: I0130 21:16:17.976078 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:17 crc kubenswrapper[4751]: I0130 21:16:17.976096 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:17 crc kubenswrapper[4751]: I0130 21:16:17.976138 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:17 crc kubenswrapper[4751]: E0130 21:16:17.976303 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:17 crc kubenswrapper[4751]: E0130 21:16:17.976542 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:17 crc kubenswrapper[4751]: E0130 21:16:17.976717 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:18 crc kubenswrapper[4751]: I0130 21:16:18.975046 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:18 crc kubenswrapper[4751]: E0130 21:16:18.975221 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:19 crc kubenswrapper[4751]: I0130 21:16:19.975641 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:19 crc kubenswrapper[4751]: I0130 21:16:19.975723 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:19 crc kubenswrapper[4751]: E0130 21:16:19.975870 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:19 crc kubenswrapper[4751]: I0130 21:16:19.976198 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:19 crc kubenswrapper[4751]: E0130 21:16:19.976312 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:19 crc kubenswrapper[4751]: E0130 21:16:19.976709 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:20 crc kubenswrapper[4751]: I0130 21:16:20.975602 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:20 crc kubenswrapper[4751]: E0130 21:16:20.975822 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:21 crc kubenswrapper[4751]: I0130 21:16:21.975580 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:21 crc kubenswrapper[4751]: I0130 21:16:21.975621 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:21 crc kubenswrapper[4751]: E0130 21:16:21.977663 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:21 crc kubenswrapper[4751]: I0130 21:16:21.977939 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:21 crc kubenswrapper[4751]: E0130 21:16:21.977950 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:21 crc kubenswrapper[4751]: E0130 21:16:21.978560 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:22 crc kubenswrapper[4751]: E0130 21:16:22.011705 4751 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 30 21:16:22 crc kubenswrapper[4751]: E0130 21:16:22.100511 4751 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 30 21:16:22 crc kubenswrapper[4751]: I0130 21:16:22.647652 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5sgk2_bcecdc4b-6607-4e4e-a9b5-49b85c030d21/kube-multus/1.log" Jan 30 21:16:22 crc kubenswrapper[4751]: I0130 21:16:22.648565 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5sgk2_bcecdc4b-6607-4e4e-a9b5-49b85c030d21/kube-multus/0.log" Jan 30 21:16:22 crc kubenswrapper[4751]: I0130 21:16:22.648640 4751 generic.go:334] "Generic (PLEG): container finished" podID="bcecdc4b-6607-4e4e-a9b5-49b85c030d21" containerID="2c6ea3db26de86b678d2306adc7f90c1d03797d9dd14847d766d709276053d02" exitCode=1 Jan 30 21:16:22 crc kubenswrapper[4751]: I0130 21:16:22.648682 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5sgk2" event={"ID":"bcecdc4b-6607-4e4e-a9b5-49b85c030d21","Type":"ContainerDied","Data":"2c6ea3db26de86b678d2306adc7f90c1d03797d9dd14847d766d709276053d02"} Jan 30 21:16:22 crc kubenswrapper[4751]: I0130 21:16:22.648727 4751 scope.go:117] "RemoveContainer" containerID="a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c" Jan 30 21:16:22 crc kubenswrapper[4751]: I0130 21:16:22.649366 4751 scope.go:117] "RemoveContainer" containerID="2c6ea3db26de86b678d2306adc7f90c1d03797d9dd14847d766d709276053d02" Jan 30 21:16:22 crc kubenswrapper[4751]: E0130 21:16:22.649619 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-5sgk2_openshift-multus(bcecdc4b-6607-4e4e-a9b5-49b85c030d21)\"" pod="openshift-multus/multus-5sgk2" podUID="bcecdc4b-6607-4e4e-a9b5-49b85c030d21" Jan 30 21:16:22 crc kubenswrapper[4751]: I0130 21:16:22.678126 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" podStartSLOduration=94.678071844 podStartE2EDuration="1m34.678071844s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:00.584776197 +0000 UTC m=+99.330598936" watchObservedRunningTime="2026-01-30 21:16:22.678071844 +0000 UTC m=+121.423894533" Jan 30 21:16:22 crc kubenswrapper[4751]: I0130 21:16:22.975168 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:22 crc kubenswrapper[4751]: E0130 21:16:22.975395 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:23 crc kubenswrapper[4751]: I0130 21:16:23.655283 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5sgk2_bcecdc4b-6607-4e4e-a9b5-49b85c030d21/kube-multus/1.log" Jan 30 21:16:23 crc kubenswrapper[4751]: I0130 21:16:23.975211 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:23 crc kubenswrapper[4751]: I0130 21:16:23.975318 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:23 crc kubenswrapper[4751]: I0130 21:16:23.975223 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:23 crc kubenswrapper[4751]: E0130 21:16:23.975689 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:23 crc kubenswrapper[4751]: E0130 21:16:23.975826 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:23 crc kubenswrapper[4751]: E0130 21:16:23.975553 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:24 crc kubenswrapper[4751]: I0130 21:16:24.975378 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:24 crc kubenswrapper[4751]: E0130 21:16:24.976387 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:25 crc kubenswrapper[4751]: I0130 21:16:25.975692 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:25 crc kubenswrapper[4751]: E0130 21:16:25.975850 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:25 crc kubenswrapper[4751]: I0130 21:16:25.975698 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:25 crc kubenswrapper[4751]: E0130 21:16:25.975968 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:25 crc kubenswrapper[4751]: I0130 21:16:25.976046 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:25 crc kubenswrapper[4751]: E0130 21:16:25.976123 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:26 crc kubenswrapper[4751]: I0130 21:16:26.975072 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:26 crc kubenswrapper[4751]: E0130 21:16:26.975262 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:27 crc kubenswrapper[4751]: E0130 21:16:27.101874 4751 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:16:27 crc kubenswrapper[4751]: I0130 21:16:27.976158 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:27 crc kubenswrapper[4751]: I0130 21:16:27.976309 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:27 crc kubenswrapper[4751]: I0130 21:16:27.976302 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:27 crc kubenswrapper[4751]: E0130 21:16:27.976456 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:27 crc kubenswrapper[4751]: E0130 21:16:27.976824 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:27 crc kubenswrapper[4751]: E0130 21:16:27.976980 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:28 crc kubenswrapper[4751]: I0130 21:16:28.975838 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:28 crc kubenswrapper[4751]: E0130 21:16:28.976043 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:29 crc kubenswrapper[4751]: I0130 21:16:29.974824 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:29 crc kubenswrapper[4751]: I0130 21:16:29.974821 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:29 crc kubenswrapper[4751]: I0130 21:16:29.975160 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:29 crc kubenswrapper[4751]: E0130 21:16:29.975143 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:29 crc kubenswrapper[4751]: E0130 21:16:29.975413 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:29 crc kubenswrapper[4751]: E0130 21:16:29.975539 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:30 crc kubenswrapper[4751]: I0130 21:16:30.975193 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:30 crc kubenswrapper[4751]: E0130 21:16:30.975429 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:31 crc kubenswrapper[4751]: I0130 21:16:31.975157 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:31 crc kubenswrapper[4751]: E0130 21:16:31.977129 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:31 crc kubenswrapper[4751]: I0130 21:16:31.977275 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:31 crc kubenswrapper[4751]: I0130 21:16:31.977356 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:31 crc kubenswrapper[4751]: E0130 21:16:31.977870 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:31 crc kubenswrapper[4751]: E0130 21:16:31.978047 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:31 crc kubenswrapper[4751]: I0130 21:16:31.978493 4751 scope.go:117] "RemoveContainer" containerID="959e2d34bf4d2470d1737891bbe3d8704e887259d95ea026e0467f531587bd29" Jan 30 21:16:32 crc kubenswrapper[4751]: E0130 21:16:32.102734 4751 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:16:32 crc kubenswrapper[4751]: I0130 21:16:32.691430 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/3.log" Jan 30 21:16:32 crc kubenswrapper[4751]: I0130 21:16:32.694789 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982"} Jan 30 21:16:32 crc kubenswrapper[4751]: I0130 21:16:32.695261 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:16:32 crc kubenswrapper[4751]: I0130 21:16:32.743036 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podStartSLOduration=104.74301404 podStartE2EDuration="1m44.74301404s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:32.742600829 +0000 UTC m=+131.488423538" watchObservedRunningTime="2026-01-30 21:16:32.74301404 +0000 UTC m=+131.488836729" Jan 30 21:16:32 crc kubenswrapper[4751]: I0130 21:16:32.892807 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-c477w"] Jan 30 21:16:32 crc kubenswrapper[4751]: I0130 21:16:32.892943 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:32 crc kubenswrapper[4751]: E0130 21:16:32.893033 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:33 crc kubenswrapper[4751]: I0130 21:16:33.978606 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:33 crc kubenswrapper[4751]: I0130 21:16:33.978606 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:33 crc kubenswrapper[4751]: I0130 21:16:33.978760 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:33 crc kubenswrapper[4751]: E0130 21:16:33.979693 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:33 crc kubenswrapper[4751]: E0130 21:16:33.979855 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:33 crc kubenswrapper[4751]: E0130 21:16:33.980292 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:34 crc kubenswrapper[4751]: I0130 21:16:34.975112 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:34 crc kubenswrapper[4751]: E0130 21:16:34.975519 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:35 crc kubenswrapper[4751]: I0130 21:16:35.975248 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:35 crc kubenswrapper[4751]: I0130 21:16:35.975309 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:35 crc kubenswrapper[4751]: E0130 21:16:35.975516 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:35 crc kubenswrapper[4751]: E0130 21:16:35.975666 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:35 crc kubenswrapper[4751]: I0130 21:16:35.976557 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:35 crc kubenswrapper[4751]: E0130 21:16:35.976843 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:36 crc kubenswrapper[4751]: I0130 21:16:36.974774 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:36 crc kubenswrapper[4751]: E0130 21:16:36.975011 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:37 crc kubenswrapper[4751]: E0130 21:16:37.103462 4751 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:16:37 crc kubenswrapper[4751]: I0130 21:16:37.974881 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:37 crc kubenswrapper[4751]: I0130 21:16:37.974911 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:37 crc kubenswrapper[4751]: I0130 21:16:37.974999 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:37 crc kubenswrapper[4751]: E0130 21:16:37.975044 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:37 crc kubenswrapper[4751]: E0130 21:16:37.975134 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:37 crc kubenswrapper[4751]: E0130 21:16:37.975439 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:37 crc kubenswrapper[4751]: I0130 21:16:37.975807 4751 scope.go:117] "RemoveContainer" containerID="2c6ea3db26de86b678d2306adc7f90c1d03797d9dd14847d766d709276053d02" Jan 30 21:16:38 crc kubenswrapper[4751]: I0130 21:16:38.718766 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5sgk2_bcecdc4b-6607-4e4e-a9b5-49b85c030d21/kube-multus/1.log" Jan 30 21:16:38 crc kubenswrapper[4751]: I0130 21:16:38.718867 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5sgk2" event={"ID":"bcecdc4b-6607-4e4e-a9b5-49b85c030d21","Type":"ContainerStarted","Data":"83b2f589d316b2b21ef50ee0174ac43309d977d8244dba740216ca2dd67db344"} Jan 30 21:16:38 crc kubenswrapper[4751]: I0130 21:16:38.974820 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:38 crc kubenswrapper[4751]: E0130 21:16:38.975086 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:39 crc kubenswrapper[4751]: I0130 21:16:39.975647 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:39 crc kubenswrapper[4751]: I0130 21:16:39.975689 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:39 crc kubenswrapper[4751]: I0130 21:16:39.975689 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:39 crc kubenswrapper[4751]: E0130 21:16:39.975835 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:39 crc kubenswrapper[4751]: E0130 21:16:39.976144 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:39 crc kubenswrapper[4751]: E0130 21:16:39.976408 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:40 crc kubenswrapper[4751]: I0130 21:16:40.975222 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:40 crc kubenswrapper[4751]: E0130 21:16:40.975531 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:41 crc kubenswrapper[4751]: I0130 21:16:41.975276 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:41 crc kubenswrapper[4751]: E0130 21:16:41.975501 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:41 crc kubenswrapper[4751]: I0130 21:16:41.975769 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:41 crc kubenswrapper[4751]: E0130 21:16:41.977807 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:41 crc kubenswrapper[4751]: I0130 21:16:41.977852 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:41 crc kubenswrapper[4751]: E0130 21:16:41.977983 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:42 crc kubenswrapper[4751]: I0130 21:16:42.975696 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:42 crc kubenswrapper[4751]: I0130 21:16:42.978629 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 21:16:42 crc kubenswrapper[4751]: I0130 21:16:42.981492 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 21:16:43 crc kubenswrapper[4751]: I0130 21:16:43.975958 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:43 crc kubenswrapper[4751]: I0130 21:16:43.976198 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:43 crc kubenswrapper[4751]: I0130 21:16:43.976315 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:43 crc kubenswrapper[4751]: I0130 21:16:43.979255 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 21:16:43 crc kubenswrapper[4751]: I0130 21:16:43.979815 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 21:16:43 crc kubenswrapper[4751]: I0130 21:16:43.980081 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 21:16:43 crc kubenswrapper[4751]: I0130 21:16:43.980843 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 21:16:49 crc kubenswrapper[4751]: I0130 21:16:49.811951 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:49 crc kubenswrapper[4751]: I0130 21:16:49.812123 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:49 crc kubenswrapper[4751]: I0130 21:16:49.812166 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:49 crc kubenswrapper[4751]: I0130 21:16:49.812222 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 
Jan 30 21:16:49 crc kubenswrapper[4751]: E0130 21:16:49.813765 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:18:51.81372493 +0000 UTC m=+270.559547619 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:49 crc kubenswrapper[4751]: I0130 21:16:49.814608 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:49 crc kubenswrapper[4751]: I0130 21:16:49.821435 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:49 crc kubenswrapper[4751]: I0130 21:16:49.821633 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:49 crc kubenswrapper[4751]: I0130 21:16:49.913815 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:49 crc kubenswrapper[4751]: I0130 21:16:49.918521 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:50 crc kubenswrapper[4751]: I0130 21:16:50.000648 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:50 crc kubenswrapper[4751]: I0130 21:16:50.020582 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
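The nestedpendingoperations record above shows the volume manager's exponential backoff saturated: the CSI driver kubevirt.io.hostpath-provisioner has not re-registered after the restart, so the unmount fails and further retries are blocked for the full 2m2s (21:16:49.813 plus 2m2s is exactly the 21:18:51.813 "No retries permitted until" timestamp). A toy sketch of that capped doubling; the 2m2s cap is taken from the log, the 500ms starting delay is an assumption:

```go
// backoff.go - toy reproduction of the capped exponential backoff that
// produces "(durationBeforeRetry 2m2s)". Not kubelet source.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond           // assumed initial delay
	maxDelay := 2*time.Minute + 2*time.Second // cap visible in the log
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %d fails: next retry permitted in %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // once saturated, every failure waits the full 2m2s
		}
	}
}
```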
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:50 crc kubenswrapper[4751]: W0130 21:16:50.354880 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-727a24abfc6bee03bd6244b50025642bd0da9aee41457573e4c40e2d5ba4578c WatchSource:0}: Error finding container 727a24abfc6bee03bd6244b50025642bd0da9aee41457573e4c40e2d5ba4578c: Status 404 returned error can't find the container with id 727a24abfc6bee03bd6244b50025642bd0da9aee41457573e4c40e2d5ba4578c Jan 30 21:16:50 crc kubenswrapper[4751]: W0130 21:16:50.355736 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-687c61c41b15949ac0b4bcc29898828f27a92c0d9a00299b29e187cd71555288 WatchSource:0}: Error finding container 687c61c41b15949ac0b4bcc29898828f27a92c0d9a00299b29e187cd71555288: Status 404 returned error can't find the container with id 687c61c41b15949ac0b4bcc29898828f27a92c0d9a00299b29e187cd71555288 Jan 30 21:16:50 crc kubenswrapper[4751]: I0130 21:16:50.773317 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4846ea91681b12b397a16f38132275101045c23a0d3e5e9dba986e645f244544"} Jan 30 21:16:50 crc kubenswrapper[4751]: I0130 21:16:50.773488 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"687c61c41b15949ac0b4bcc29898828f27a92c0d9a00299b29e187cd71555288"} Jan 30 21:16:50 crc kubenswrapper[4751]: I0130 21:16:50.773816 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:50 crc kubenswrapper[4751]: I0130 21:16:50.775799 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c202d23928b1157b9956dbb2cccc152128915eb12dce8495093c51e98b2c8d24"} Jan 30 21:16:50 crc kubenswrapper[4751]: I0130 21:16:50.775880 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"727a24abfc6bee03bd6244b50025642bd0da9aee41457573e4c40e2d5ba4578c"} Jan 30 21:16:50 crc kubenswrapper[4751]: I0130 21:16:50.778398 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3ba19b1bad242d06c2107c879585907e0de0a87cee578c9543f6d2a667e133a1"} Jan 30 21:16:50 crc kubenswrapper[4751]: I0130 21:16:50.778455 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"40cd9d502644359f6d3f370ebb3f673f5f463ca6bac4f1911d1c8e539a8f92c7"} Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.024018 4751 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeReady" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.091583 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-nk5rn"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.092358 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.100597 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-plkp9"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.102919 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8jsqt"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.103746 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.104067 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.118213 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.119566 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.119752 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.120038 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.120299 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.120486 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.120634 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.120811 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.126306 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.126992 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.127519 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.128126 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.128619 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.129095 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.129597 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.129950 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130116 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130167 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130237 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130287 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130415 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130516 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130525 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130583 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130650 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130693 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130785 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130653 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.137402 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9wvms"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.137964 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.139977 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.140549 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.140635 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x98hg"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.142069 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.143718 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-7bw65"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.144057 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.145684 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.145834 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.145984 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.146116 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.147084 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-8l2v5"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.147664 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-8l2v5" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.150987 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6dcxn"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.151474 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.152400 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.152540 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.152850 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.153473 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.154133 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.154412 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.154689 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.154937 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.155625 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.155783 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.155927 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.156250 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6gckm"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.156356 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.156683 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6gckm" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.158308 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.158869 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164395 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164657 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164684 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164702 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164801 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164849 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164861 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164667 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164920 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164948 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164973 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164979 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.195739 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.200936 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.201158 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.201397 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.201537 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.201729 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.201865 
4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.201993 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.202118 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.202234 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.203121 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.203271 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.203502 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.199310 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9lsr5"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.206167 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.207893 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.208864 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.209024 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.209127 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.209220 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.209450 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.209580 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.208476 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.209898 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.210542 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 
21:16:51.210919 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.210994 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.211095 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.211152 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.211238 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.211354 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.211452 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.211536 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.211647 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.211735 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.211821 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.211931 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.212021 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.211932 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.220427 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.220557 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.220840 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.220857 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.220866 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.220879 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.221946 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.221957 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zpvhg"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.222560 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.222581 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-w42cs"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.223146 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.223269 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.223539 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mp5g5"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.224089 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.224265 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.224297 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.226259 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.226757 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.227185 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.230767 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.233839 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.234298 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.234975 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235015 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235123 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235200 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235250 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235685 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ebb4c857-4f54-440f-81d7-74eadc588099-images\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235714 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-config\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235731 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235750 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/80b9c760-3c34-42cb-bb23-1f11dad50e58-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x98hg\" (UID: \"80b9c760-3c34-42cb-bb23-1f11dad50e58\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235766 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-audit\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235781 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-trusted-ca-bundle\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235798 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gppkm\" (UniqueName: \"kubernetes.io/projected/6d872f03-d4d0-49bc-9758-05060035dafa-kube-api-access-gppkm\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235812 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx729\" (UniqueName: \"kubernetes.io/projected/322809f5-4f4c-487e-8488-6c62bac86f8f-kube-api-access-kx729\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235827 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pmck\" (UniqueName: \"kubernetes.io/projected/5c9671c2-84f9-4719-b497-4fa77803105b-kube-api-access-9pmck\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235844 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-trusted-ca-bundle\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235861 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3639569-5d39-4fa1-863c-45307b3da476-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cqk7w\" (UID: \"f3639569-5d39-4fa1-863c-45307b3da476\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235875 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-image-import-ca\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235894 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebb4c857-4f54-440f-81d7-74eadc588099-config\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235910 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/322809f5-4f4c-487e-8488-6c62bac86f8f-serving-cert\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235924 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4ed27b8-56e7-4e93-aea6-83adae8affb6-serving-cert\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235941 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c9671c2-84f9-4719-b497-4fa77803105b-serving-cert\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235957 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d872f03-d4d0-49bc-9758-05060035dafa-serving-cert\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235974 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-config\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235990 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c9671c2-84f9-4719-b497-4fa77803105b-service-ca-bundle\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.236007 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-oauth-serving-cert\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.236021 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6d872f03-d4d0-49bc-9758-05060035dafa-encryption-config\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.236029 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.236280 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.237070 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.237218 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.237879 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238392 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.236036 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfsf5\" (UniqueName: \"kubernetes.io/projected/f3639569-5d39-4fa1-863c-45307b3da476-kube-api-access-zfsf5\") pod \"openshift-apiserver-operator-796bbdcf4f-cqk7w\" (UID: \"f3639569-5d39-4fa1-863c-45307b3da476\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238579 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238577 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-auth-proxy-config\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238722 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtnfh\" (UniqueName: \"kubernetes.io/projected/61e09136-e0d4-4c75-ad01-543778867411-kube-api-access-wtnfh\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238755 
4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-client-ca\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238773 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-machine-approver-tls\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238788 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7xpm\" (UniqueName: \"kubernetes.io/projected/b4ed27b8-56e7-4e93-aea6-83adae8affb6-kube-api-access-v7xpm\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238805 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4vh8\" (UniqueName: \"kubernetes.io/projected/ebb4c857-4f54-440f-81d7-74eadc588099-kube-api-access-t4vh8\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238821 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-serving-cert\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238837 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrwzw\" (UniqueName: \"kubernetes.io/projected/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-kube-api-access-rrwzw\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238870 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ebb4c857-4f54-440f-81d7-74eadc588099-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238919 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4ed27b8-56e7-4e93-aea6-83adae8affb6-trusted-ca\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238948 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6d872f03-d4d0-49bc-9758-05060035dafa-node-pullsecrets\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238964 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-client-ca\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238979 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-config\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238995 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-oauth-config\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239021 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61e09136-e0d4-4c75-ad01-543778867411-serving-cert\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239037 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8k2d\" (UniqueName: \"kubernetes.io/projected/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-kube-api-access-n8k2d\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239061 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4ed27b8-56e7-4e93-aea6-83adae8affb6-config\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239078 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80b9c760-3c34-42cb-bb23-1f11dad50e58-serving-cert\") pod \"openshift-config-operator-7777fb866f-x98hg\" (UID: \"80b9c760-3c34-42cb-bb23-1f11dad50e58\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239096 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-config\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239137 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c9671c2-84f9-4719-b497-4fa77803105b-config\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239207 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d872f03-d4d0-49bc-9758-05060035dafa-audit-dir\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239247 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-service-ca\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239268 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpjkx\" (UniqueName: \"kubernetes.io/projected/21dc9dc0-702d-49a7-baed-f8e70f6867f3-kube-api-access-mpjkx\") pod \"downloads-7954f5f757-8l2v5\" (UID: \"21dc9dc0-702d-49a7-baed-f8e70f6867f3\") " pod="openshift-console/downloads-7954f5f757-8l2v5"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239385 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6d872f03-d4d0-49bc-9758-05060035dafa-etcd-client\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239480 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c9671c2-84f9-4719-b497-4fa77803105b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.240059 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-config\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.240106 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-etcd-serving-ca\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.240126 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3639569-5d39-4fa1-863c-45307b3da476-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cqk7w\" (UID: \"f3639569-5d39-4fa1-863c-45307b3da476\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.240144 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w78rj\" (UniqueName: \"kubernetes.io/projected/80b9c760-3c34-42cb-bb23-1f11dad50e58-kube-api-access-w78rj\") pod \"openshift-config-operator-7777fb866f-x98hg\" (UID: \"80b9c760-3c34-42cb-bb23-1f11dad50e58\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.241274 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.250359 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.250851 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.253596 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.254071 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.266651 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.268504 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l4lnd"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.269046 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.269515 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.269813 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.289658 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.289814 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-nk5rn"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.290397 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.291208 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.297908 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.300372 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tr6kv"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.303291 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.304570 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.307227 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.307476 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zzk29"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.308285 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zzk29"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.308922 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.309567 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.310157 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qthvh"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.310884 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.311602 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.312181 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.312846 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.313424 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.314103 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.317705 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.318178 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.318659 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.319120 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.319756 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-m2zrs"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.320396 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-m2zrs"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.320560 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.321697 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-plkp9"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.323301 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8jsqt"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.327549 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x98hg"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.333839 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.335404 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.336925 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.337791 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.339950 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tr6kv"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340724 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6d872f03-d4d0-49bc-9758-05060035dafa-etcd-client\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340764 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c9671c2-84f9-4719-b497-4fa77803105b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340787 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-config\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340810 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-etcd-serving-ca\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340832 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3639569-5d39-4fa1-863c-45307b3da476-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cqk7w\" (UID: \"f3639569-5d39-4fa1-863c-45307b3da476\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340854 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w78rj\" (UniqueName: \"kubernetes.io/projected/80b9c760-3c34-42cb-bb23-1f11dad50e58-kube-api-access-w78rj\") pod \"openshift-config-operator-7777fb866f-x98hg\" (UID: \"80b9c760-3c34-42cb-bb23-1f11dad50e58\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340876 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/80b9c760-3c34-42cb-bb23-1f11dad50e58-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x98hg\" (UID: \"80b9c760-3c34-42cb-bb23-1f11dad50e58\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340898 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ebb4c857-4f54-440f-81d7-74eadc588099-images\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340921 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-config\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340940 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340960 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-audit\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340980 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-trusted-ca-bundle\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341003 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gppkm\" (UniqueName: \"kubernetes.io/projected/6d872f03-d4d0-49bc-9758-05060035dafa-kube-api-access-gppkm\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341028 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx729\" (UniqueName: \"kubernetes.io/projected/322809f5-4f4c-487e-8488-6c62bac86f8f-kube-api-access-kx729\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341049 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pmck\" (UniqueName: \"kubernetes.io/projected/5c9671c2-84f9-4719-b497-4fa77803105b-kube-api-access-9pmck\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341071 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-trusted-ca-bundle\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341096 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3639569-5d39-4fa1-863c-45307b3da476-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cqk7w\" (UID: \"f3639569-5d39-4fa1-863c-45307b3da476\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341119 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-image-import-ca\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341143 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebb4c857-4f54-440f-81d7-74eadc588099-config\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341164 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/322809f5-4f4c-487e-8488-6c62bac86f8f-serving-cert\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341185 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4ed27b8-56e7-4e93-aea6-83adae8affb6-serving-cert\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341204 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c9671c2-84f9-4719-b497-4fa77803105b-serving-cert\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341225 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-oauth-serving-cert\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341247 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d872f03-d4d0-49bc-9758-05060035dafa-serving-cert\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341266 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-config\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341285 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c9671c2-84f9-4719-b497-4fa77803105b-service-ca-bundle\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341307 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6d872f03-d4d0-49bc-9758-05060035dafa-encryption-config\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341346 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfsf5\" (UniqueName: \"kubernetes.io/projected/f3639569-5d39-4fa1-863c-45307b3da476-kube-api-access-zfsf5\") pod \"openshift-apiserver-operator-796bbdcf4f-cqk7w\" (UID: \"f3639569-5d39-4fa1-863c-45307b3da476\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341370 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-auth-proxy-config\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341393 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7xpm\" (UniqueName: \"kubernetes.io/projected/b4ed27b8-56e7-4e93-aea6-83adae8affb6-kube-api-access-v7xpm\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341417 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtnfh\" (UniqueName: \"kubernetes.io/projected/61e09136-e0d4-4c75-ad01-543778867411-kube-api-access-wtnfh\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341439 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-client-ca\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341461 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-machine-approver-tls\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341485 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4vh8\" (UniqueName: \"kubernetes.io/projected/ebb4c857-4f54-440f-81d7-74eadc588099-kube-api-access-t4vh8\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341506 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-serving-cert\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341528 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrwzw\" (UniqueName: \"kubernetes.io/projected/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-kube-api-access-rrwzw\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341560 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ebb4c857-4f54-440f-81d7-74eadc588099-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341585 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4ed27b8-56e7-4e93-aea6-83adae8affb6-trusted-ca\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341609 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6d872f03-d4d0-49bc-9758-05060035dafa-node-pullsecrets\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341630 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-client-ca\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341651 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-config\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341670 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-oauth-config\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341691 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61e09136-e0d4-4c75-ad01-543778867411-serving-cert\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341713 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8k2d\" (UniqueName: \"kubernetes.io/projected/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-kube-api-access-n8k2d\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341736 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4ed27b8-56e7-4e93-aea6-83adae8affb6-config\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341755 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80b9c760-3c34-42cb-bb23-1f11dad50e58-serving-cert\") pod \"openshift-config-operator-7777fb866f-x98hg\" (UID: \"80b9c760-3c34-42cb-bb23-1f11dad50e58\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341776 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-config\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341797 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c9671c2-84f9-4719-b497-4fa77803105b-config\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341820 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d872f03-d4d0-49bc-9758-05060035dafa-audit-dir\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341843 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpjkx\" (UniqueName: \"kubernetes.io/projected/21dc9dc0-702d-49a7-baed-f8e70f6867f3-kube-api-access-mpjkx\") pod \"downloads-7954f5f757-8l2v5\" (UID: \"21dc9dc0-702d-49a7-baed-f8e70f6867f3\") " pod="openshift-console/downloads-7954f5f757-8l2v5"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341867 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-service-ca\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.342301 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-7bw65"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.342790 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-oauth-serving-cert\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.343051 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-service-ca\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.343076 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-audit\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.343479 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8l2v5"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.343763 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-image-import-ca\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.343925 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6dcxn"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.343986 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3639569-5d39-4fa1-863c-45307b3da476-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cqk7w\" (UID: \"f3639569-5d39-4fa1-863c-45307b3da476\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.344545 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebb4c857-4f54-440f-81d7-74eadc588099-config\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.344992 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.346054 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-config\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.346509 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-trusted-ca-bundle\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.346782 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d872f03-d4d0-49bc-9758-05060035dafa-serving-cert\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.347917 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/80b9c760-3c34-42cb-bb23-1f11dad50e58-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x98hg\" (UID: \"80b9c760-3c34-42cb-bb23-1f11dad50e58\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.348224 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6d872f03-d4d0-49bc-9758-05060035dafa-node-pullsecrets\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.348907 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-client-ca\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.349728 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-etcd-serving-ca\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.349950 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-config\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.350371 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ebb4c857-4f54-440f-81d7-74eadc588099-images\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.350436 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d872f03-d4d0-49bc-9758-05060035dafa-audit-dir\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.350605 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c9671c2-84f9-4719-b497-4fa77803105b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.351102 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-trusted-ca-bundle\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.351291 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4ed27b8-56e7-4e93-aea6-83adae8affb6-trusted-ca\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.351773 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c9671c2-84f9-4719-b497-4fa77803105b-config\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.351875 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.352184 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c9671c2-84f9-4719-b497-4fa77803105b-service-ca-bundle\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.352525 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-config\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.354450 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-config\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.354773 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-client-ca\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.355687 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4ed27b8-56e7-4e93-aea6-83adae8affb6-serving-cert\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.355922 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80b9c760-3c34-42cb-bb23-1f11dad50e58-serving-cert\") pod \"openshift-config-operator-7777fb866f-x98hg\" (UID: \"80b9c760-3c34-42cb-bb23-1f11dad50e58\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.356319 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/322809f5-4f4c-487e-8488-6c62bac86f8f-serving-cert\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.356640 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6d872f03-d4d0-49bc-9758-05060035dafa-etcd-client\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.356958 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-config\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.357049 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-auth-proxy-config\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.358273 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-serving-cert\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.358277 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.360534 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4ed27b8-56e7-4e93-aea6-83adae8affb6-config\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.361542 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6d872f03-d4d0-49bc-9758-05060035dafa-encryption-config\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.361917 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61e09136-e0d4-4c75-ad01-543778867411-serving-cert\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.361962 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-machine-approver-tls\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.366066 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.367149 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zpvhg"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.368107 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.369056 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.373990 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.375966 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3639569-5d39-4fa1-863c-45307b3da476-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cqk7w\" (UID: \"f3639569-5d39-4fa1-863c-45307b3da476\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.377492 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c9671c2-84f9-4719-b497-4fa77803105b-serving-cert\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.378355 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.378867 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ebb4c857-4f54-440f-81d7-74eadc588099-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.378995 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-oauth-config\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.379572 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9lsr5"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.380395 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.384361 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9wvms"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.384471 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.385480 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.387178 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.388651 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6gckm"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.390226 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.391414 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mp5g5"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.391662 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.392815 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l4lnd"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.393864 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-hrfwj"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.394386 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.394652 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-hrfwj"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.397239 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zzk29"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.398491 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.409272 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.411148 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.413059 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-m2zrs"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.414787 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.415060 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qthvh"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.416770 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.418817 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.420892 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hrfwj"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.423604 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2hvtm"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.424319 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2hvtm"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.424831 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tw9q7"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.426249 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tw9q7"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.426280 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tw9q7"]
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.434439 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.453457 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.474805 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.494565 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.514711 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.534622 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.554624 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.574026 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.597630 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.614690 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.634719 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.655251 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.674796 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.714574 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.735511 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.755584 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.774574 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.795738 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.815036 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.836313 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.855028 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.886556 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.894648 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.915261 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.934897 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.954726 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.976198 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.995143 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.015364 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.034693 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.061084 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.075019 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.095754 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.114579 4751 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.136770 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.155137 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.175686 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.195315 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.215354 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.234174 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.274897 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.293409 4751 request.go:700] Waited for 1.00109133s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&limit=500&resourceVersion=0 Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.295362 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.314721 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.334718 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.354942 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.375281 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.395682 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.427554 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.435837 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.455766 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 
30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.474988 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.495760 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.515604 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.535087 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.555440 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.576072 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.595264 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.614839 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.635202 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.655687 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.675279 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.695069 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.715456 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.734839 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.755723 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.775679 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.795532 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.815523 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.835149 4751 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.855451 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.876709 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.896654 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.914725 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.935833 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.954730 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.974346 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.022752 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w78rj\" (UniqueName: \"kubernetes.io/projected/80b9c760-3c34-42cb-bb23-1f11dad50e58-kube-api-access-w78rj\") pod \"openshift-config-operator-7777fb866f-x98hg\" (UID: \"80b9c760-3c34-42cb-bb23-1f11dad50e58\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.044205 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gppkm\" (UniqueName: \"kubernetes.io/projected/6d872f03-d4d0-49bc-9758-05060035dafa-kube-api-access-gppkm\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.068980 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx729\" (UniqueName: \"kubernetes.io/projected/322809f5-4f4c-487e-8488-6c62bac86f8f-kube-api-access-kx729\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.086908 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pmck\" (UniqueName: \"kubernetes.io/projected/5c9671c2-84f9-4719-b497-4fa77803105b-kube-api-access-9pmck\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.099030 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.101312 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtnfh\" (UniqueName: \"kubernetes.io/projected/61e09136-e0d4-4c75-ad01-543778867411-kube-api-access-wtnfh\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.121833 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4vh8\" (UniqueName: \"kubernetes.io/projected/ebb4c857-4f54-440f-81d7-74eadc588099-kube-api-access-t4vh8\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.143303 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpjkx\" (UniqueName: \"kubernetes.io/projected/21dc9dc0-702d-49a7-baed-f8e70f6867f3-kube-api-access-mpjkx\") pod \"downloads-7954f5f757-8l2v5\" (UID: \"21dc9dc0-702d-49a7-baed-f8e70f6867f3\") " pod="openshift-console/downloads-7954f5f757-8l2v5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.157306 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrwzw\" (UniqueName: \"kubernetes.io/projected/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-kube-api-access-rrwzw\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.192032 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8k2d\" (UniqueName: \"kubernetes.io/projected/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-kube-api-access-n8k2d\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.201809 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7xpm\" (UniqueName: \"kubernetes.io/projected/b4ed27b8-56e7-4e93-aea6-83adae8affb6-kube-api-access-v7xpm\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.215306 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.221476 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfsf5\" (UniqueName: \"kubernetes.io/projected/f3639569-5d39-4fa1-863c-45307b3da476-kube-api-access-zfsf5\") pod \"openshift-apiserver-operator-796bbdcf4f-cqk7w\" (UID: \"f3639569-5d39-4fa1-863c-45307b3da476\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.233455 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.237874 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.247778 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.255560 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.264209 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.275262 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.294483 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.307656 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.312958 4751 request.go:700] Waited for 1.888323945s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&limit=500&resourceVersion=0 Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.314268 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.322000 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x98hg"] Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.324544 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.336920 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 21:16:53 crc kubenswrapper[4751]: W0130 21:16:53.342304 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b579d29_157b_4ff2_b623_d4af8fd6a8fe.slice/crio-f6f99673aa6bb0aad2f3c4036b7f5f6b83610d6ebe270ac0c27a7c0d5e533722 WatchSource:0}: Error finding container f6f99673aa6bb0aad2f3c4036b7f5f6b83610d6ebe270ac0c27a7c0d5e533722: Status 404 returned error can't find the container with id f6f99673aa6bb0aad2f3c4036b7f5f6b83610d6ebe270ac0c27a7c0d5e533722 Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.352702 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.354890 4751 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 21:16:53 crc kubenswrapper[4751]: W0130 21:16:53.363650 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80b9c760_3c34_42cb_bb23_1f11dad50e58.slice/crio-5d290ac6257102448a66da1338c6ff7602f0050bddbafd5f5dfec0384a0c4312 WatchSource:0}: Error finding container 5d290ac6257102448a66da1338c6ff7602f0050bddbafd5f5dfec0384a0c4312: Status 404 returned error can't find the container with id 5d290ac6257102448a66da1338c6ff7602f0050bddbafd5f5dfec0384a0c4312 Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.367858 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.374952 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.408023 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.416908 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-8l2v5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469165 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c37a69a-9a13-400f-bfff-0886b6062725-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4x88q\" (UID: \"9c37a69a-9a13-400f-bfff-0886b6062725\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469220 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469246 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dfb29a82-8be0-4219-81b1-fecfcb4e1061-default-certificate\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469270 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/555517ab-ec2d-4534-8cc4-3ecbcdda7a1b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zbt4w\" (UID: \"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469302 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469343 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c37a69a-9a13-400f-bfff-0886b6062725-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4x88q\" (UID: \"9c37a69a-9a13-400f-bfff-0886b6062725\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469400 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-registry-certificates\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469424 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/aea78078-eab1-4c82-b072-e6b65f959815-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469449 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrqzz\" (UniqueName: \"kubernetes.io/projected/9c37a69a-9a13-400f-bfff-0886b6062725-kube-api-access-xrqzz\") pod \"kube-storage-version-migrator-operator-b67b599dd-4x88q\" (UID: \"9c37a69a-9a13-400f-bfff-0886b6062725\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469484 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/af4fa723-4cc5-4fa1-9162-fa20b958fa29-etcd-ca\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469887 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9fpj\" (UniqueName: \"kubernetes.io/projected/80a7c9c5-51fd-457c-a16b-c7ad90f92811-kube-api-access-s9fpj\") pod \"dns-operator-744455d44c-mp5g5\" (UID: \"80a7c9c5-51fd-457c-a16b-c7ad90f92811\") " pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469950 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-trusted-ca\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 
21:16:53.469980 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-audit-policies\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469997 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470015 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aea78078-eab1-4c82-b072-e6b65f959815-audit-dir\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470031 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/542e69b1-7290-4693-b85b-5c9566314a51-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470050 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470067 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-registry-tls\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470081 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/aea78078-eab1-4c82-b072-e6b65f959815-etcd-client\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470097 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aea78078-eab1-4c82-b072-e6b65f959815-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470114 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stgvv\" (UniqueName: \"kubernetes.io/projected/555517ab-ec2d-4534-8cc4-3ecbcdda7a1b-kube-api-access-stgvv\") pod \"openshift-controller-manager-operator-756b6f6bc6-zbt4w\" (UID: \"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470137 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470161 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/80a7c9c5-51fd-457c-a16b-c7ad90f92811-metrics-tls\") pod \"dns-operator-744455d44c-mp5g5\" (UID: \"80a7c9c5-51fd-457c-a16b-c7ad90f92811\") " pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470178 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/af4fa723-4cc5-4fa1-9162-fa20b958fa29-etcd-client\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470193 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfb29a82-8be0-4219-81b1-fecfcb4e1061-service-ca-bundle\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470225 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/542e69b1-7290-4693-b85b-5c9566314a51-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470255 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdfzb\" (UniqueName: \"kubernetes.io/projected/af4fa723-4cc5-4fa1-9162-fa20b958fa29-kube-api-access-hdfzb\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470271 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: 
I0130 21:16:53.470307 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dfb29a82-8be0-4219-81b1-fecfcb4e1061-stats-auth\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470341 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfn6c\" (UniqueName: \"kubernetes.io/projected/542e69b1-7290-4693-b85b-5c9566314a51-kube-api-access-gfn6c\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470363 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73d0a80a-e569-428a-b251-33f28e06fffd-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470378 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkplm\" (UniqueName: \"kubernetes.io/projected/8a52a543-c530-48d9-a046-ac4008df0477-kube-api-access-qkplm\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470392 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxbcq\" (UniqueName: \"kubernetes.io/projected/dfb29a82-8be0-4219-81b1-fecfcb4e1061-kube-api-access-dxbcq\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470675 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470697 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/aea78078-eab1-4c82-b072-e6b65f959815-encryption-config\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470712 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/555517ab-ec2d-4534-8cc4-3ecbcdda7a1b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zbt4w\" (UID: \"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470733 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470775 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-bound-sa-token\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470794 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2nhc\" (UniqueName: \"kubernetes.io/projected/8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6-kube-api-access-n2nhc\") pod \"cluster-samples-operator-665b6dd947-chjdb\" (UID: \"8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470846 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470895 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73d0a80a-e569-428a-b251-33f28e06fffd-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470912 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470937 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470952 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxthv\" (UniqueName: 
\"kubernetes.io/projected/aea78078-eab1-4c82-b072-e6b65f959815-kube-api-access-kxthv\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: E0130 21:16:53.471171 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:53.971159815 +0000 UTC m=+152.716982464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471353 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/af4fa723-4cc5-4fa1-9162-fa20b958fa29-etcd-service-ca\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471381 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aea78078-eab1-4c82-b072-e6b65f959815-audit-policies\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471429 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aea78078-eab1-4c82-b072-e6b65f959815-serving-cert\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471502 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfb29a82-8be0-4219-81b1-fecfcb4e1061-metrics-certs\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471546 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a52a543-c530-48d9-a046-ac4008df0477-audit-dir\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471572 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471603 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af4fa723-4cc5-4fa1-9162-fa20b958fa29-serving-cert\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471655 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-chjdb\" (UID: \"8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471704 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cp5d\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-kube-api-access-2cp5d\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471727 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af4fa723-4cc5-4fa1-9162-fa20b958fa29-config\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471741 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/542e69b1-7290-4693-b85b-5c9566314a51-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.490361 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6gckm" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.573704 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.573928 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-bound-sa-token\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.573961 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.573988 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1f866c2-11a6-4c9b-8d42-54e5f0a18195-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l4lnd\" (UID: \"c1f866c2-11a6-4c9b-8d42-54e5f0a18195\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574010 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj26f\" (UniqueName: \"kubernetes.io/projected/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-kube-api-access-xj26f\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574030 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5d92565-846d-43a6-92e2-02351fec2f63-config\") pod \"service-ca-operator-777779d784-qthvh\" (UID: \"d5d92565-846d-43a6-92e2-02351fec2f63\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574054 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2nhc\" (UniqueName: \"kubernetes.io/projected/8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6-kube-api-access-n2nhc\") pod \"cluster-samples-operator-665b6dd947-chjdb\" (UID: \"8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574078 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxthv\" (UniqueName: \"kubernetes.io/projected/aea78078-eab1-4c82-b072-e6b65f959815-kube-api-access-kxthv\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc 
kubenswrapper[4751]: I0130 21:16:53.574099 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-webhook-cert\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574121 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-registration-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574140 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clhl7\" (UniqueName: \"kubernetes.io/projected/6167bc7b-37d7-493c-93a9-dda69bedad76-kube-api-access-clhl7\") pod \"olm-operator-6b444d44fb-z2l88\" (UID: \"6167bc7b-37d7-493c-93a9-dda69bedad76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574160 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/af4fa723-4cc5-4fa1-9162-fa20b958fa29-etcd-service-ca\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574180 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/357257a0-2b96-4833-84cb-1c4326c34e61-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xf2m8\" (UID: \"357257a0-2b96-4833-84cb-1c4326c34e61\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574213 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aea78078-eab1-4c82-b072-e6b65f959815-serving-cert\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574249 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-tmpfs\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574266 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxkk4\" (UniqueName: \"kubernetes.io/projected/8141131d-95f7-4103-bd2d-24630fc8e9b6-kube-api-access-wxkk4\") pod \"ingress-canary-m2zrs\" (UID: \"8141131d-95f7-4103-bd2d-24630fc8e9b6\") " pod="openshift-ingress-canary/ingress-canary-m2zrs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574283 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c79ee24d-6dc8-4c72-911a-9ea6810a9f9a-signing-cabundle\") pod \"service-ca-9c57cc56f-zzk29\" (UID: \"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a\") " pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574302 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af4fa723-4cc5-4fa1-9162-fa20b958fa29-serving-cert\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574337 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65e79489-5e6b-421c-8019-b1d5161a0341-config\") pod \"kube-controller-manager-operator-78b949d7b-ffw4d\" (UID: \"65e79489-5e6b-421c-8019-b1d5161a0341\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574356 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4d9e5b29-c71c-4129-bd91-ccb81940c815-proxy-tls\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574372 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8141131d-95f7-4103-bd2d-24630fc8e9b6-cert\") pod \"ingress-canary-m2zrs\" (UID: \"8141131d-95f7-4103-bd2d-24630fc8e9b6\") " pod="openshift-ingress-canary/ingress-canary-m2zrs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574397 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-socket-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574416 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/542e69b1-7290-4693-b85b-5c9566314a51-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574434 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cp5d\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-kube-api-access-2cp5d\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574463 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c37a69a-9a13-400f-bfff-0886b6062725-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-4x88q\" (UID: \"9c37a69a-9a13-400f-bfff-0886b6062725\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574480 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aae80f4-df3d-4545-8a9b-5a840e379b65-config\") pod \"kube-apiserver-operator-766d6c64bb-xz8vz\" (UID: \"7aae80f4-df3d-4545-8a9b-5a840e379b65\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574505 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574520 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/555517ab-ec2d-4534-8cc4-3ecbcdda7a1b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zbt4w\" (UID: \"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574543 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dfb29a82-8be0-4219-81b1-fecfcb4e1061-default-certificate\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574559 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6167bc7b-37d7-493c-93a9-dda69bedad76-srv-cert\") pod \"olm-operator-6b444d44fb-z2l88\" (UID: \"6167bc7b-37d7-493c-93a9-dda69bedad76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574577 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-plugins-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574592 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a50ebf9b-a11e-47ac-828c-f1858be195d7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574615 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhbm8\" (UniqueName: \"kubernetes.io/projected/130dae88-caa3-4e75-b3fc-d3b6dcd5b577-kube-api-access-hhbm8\") pod 
\"catalog-operator-68c6474976-mw25p\" (UID: \"130dae88-caa3-4e75-b3fc-d3b6dcd5b577\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574630 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-csi-data-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574645 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-registry-certificates\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574660 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/aea78078-eab1-4c82-b072-e6b65f959815-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574677 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/236e4954-0baf-4d9e-b36f-eed37707af26-node-bootstrap-token\") pod \"machine-config-server-2hvtm\" (UID: \"236e4954-0baf-4d9e-b36f-eed37707af26\") " pod="openshift-machine-config-operator/machine-config-server-2hvtm" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574698 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a50ebf9b-a11e-47ac-828c-f1858be195d7-trusted-ca\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574718 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/af4fa723-4cc5-4fa1-9162-fa20b958fa29-etcd-ca\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574735 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-audit-policies\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574751 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc 
kubenswrapper[4751]: I0130 21:16:53.574769 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9fpj\" (UniqueName: \"kubernetes.io/projected/80a7c9c5-51fd-457c-a16b-c7ad90f92811-kube-api-access-s9fpj\") pod \"dns-operator-744455d44c-mp5g5\" (UID: \"80a7c9c5-51fd-457c-a16b-c7ad90f92811\") " pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574782 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/542e69b1-7290-4693-b85b-5c9566314a51-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574798 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6167bc7b-37d7-493c-93a9-dda69bedad76-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z2l88\" (UID: \"6167bc7b-37d7-493c-93a9-dda69bedad76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574816 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-trusted-ca\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574832 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbjww\" (UniqueName: \"kubernetes.io/projected/236e4954-0baf-4d9e-b36f-eed37707af26-kube-api-access-lbjww\") pod \"machine-config-server-2hvtm\" (UID: \"236e4954-0baf-4d9e-b36f-eed37707af26\") " pod="openshift-machine-config-operator/machine-config-server-2hvtm" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574847 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aea78078-eab1-4c82-b072-e6b65f959815-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574900 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-mountpoint-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574915 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8944fd86-eca1-4882-896d-1cd3faa4b418-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hmjv6\" (UID: \"8944fd86-eca1-4882-896d-1cd3faa4b418\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574932 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tr6kv\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574947 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20cd63ce-b8cf-45fa-9d89-d917cff2894b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-99fpp\" (UID: \"20cd63ce-b8cf-45fa-9d89-d917cff2894b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574962 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shckb\" (UniqueName: \"kubernetes.io/projected/c1f866c2-11a6-4c9b-8d42-54e5f0a18195-kube-api-access-shckb\") pod \"multus-admission-controller-857f4d67dd-l4lnd\" (UID: \"c1f866c2-11a6-4c9b-8d42-54e5f0a18195\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574976 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/130dae88-caa3-4e75-b3fc-d3b6dcd5b577-srv-cert\") pod \"catalog-operator-68c6474976-mw25p\" (UID: \"130dae88-caa3-4e75-b3fc-d3b6dcd5b577\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574991 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/542e69b1-7290-4693-b85b-5c9566314a51-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575015 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdfzb\" (UniqueName: \"kubernetes.io/projected/af4fa723-4cc5-4fa1-9162-fa20b958fa29-kube-api-access-hdfzb\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575031 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc9ed63a-23a2-4b50-a290-0409ff14fd95-config-volume\") pod \"collect-profiles-29496795-lg25p\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575048 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dfb29a82-8be0-4219-81b1-fecfcb4e1061-stats-auth\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575109 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gfn6c\" (UniqueName: \"kubernetes.io/projected/542e69b1-7290-4693-b85b-5c9566314a51-kube-api-access-gfn6c\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575125 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fdq8\" (UniqueName: \"kubernetes.io/projected/4d9e5b29-c71c-4129-bd91-ccb81940c815-kube-api-access-5fdq8\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575141 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47wdx\" (UniqueName: \"kubernetes.io/projected/cc9ed63a-23a2-4b50-a290-0409ff14fd95-kube-api-access-47wdx\") pod \"collect-profiles-29496795-lg25p\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575157 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ttsm\" (UniqueName: \"kubernetes.io/projected/d5d92565-846d-43a6-92e2-02351fec2f63-kube-api-access-5ttsm\") pod \"service-ca-operator-777779d784-qthvh\" (UID: \"d5d92565-846d-43a6-92e2-02351fec2f63\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575191 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc9ed63a-23a2-4b50-a290-0409ff14fd95-secret-volume\") pod \"collect-profiles-29496795-lg25p\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575224 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tr6kv\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575242 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkplm\" (UniqueName: \"kubernetes.io/projected/8a52a543-c530-48d9-a046-ac4008df0477-kube-api-access-qkplm\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575259 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65e79489-5e6b-421c-8019-b1d5161a0341-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ffw4d\" (UID: \"65e79489-5e6b-421c-8019-b1d5161a0341\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" Jan 30 
21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575274 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-792m2\" (UniqueName: \"kubernetes.io/projected/20cd63ce-b8cf-45fa-9d89-d917cff2894b-kube-api-access-792m2\") pod \"machine-config-controller-84d6567774-99fpp\" (UID: \"20cd63ce-b8cf-45fa-9d89-d917cff2894b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575291 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a9930c0b-af24-4e39-b8e6-199a40779aff-metrics-tls\") pod \"dns-default-hrfwj\" (UID: \"a9930c0b-af24-4e39-b8e6-199a40779aff\") " pod="openshift-dns/dns-default-hrfwj" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575305 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmhw4\" (UniqueName: \"kubernetes.io/projected/357257a0-2b96-4833-84cb-1c4326c34e61-kube-api-access-pmhw4\") pod \"control-plane-machine-set-operator-78cbb6b69f-xf2m8\" (UID: \"357257a0-2b96-4833-84cb-1c4326c34e61\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575344 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73d0a80a-e569-428a-b251-33f28e06fffd-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575360 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxbcq\" (UniqueName: \"kubernetes.io/projected/dfb29a82-8be0-4219-81b1-fecfcb4e1061-kube-api-access-dxbcq\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575374 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/aea78078-eab1-4c82-b072-e6b65f959815-encryption-config\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575391 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575407 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6cq6\" (UniqueName: \"kubernetes.io/projected/a50ebf9b-a11e-47ac-828c-f1858be195d7-kube-api-access-k6cq6\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 
21:16:53.575422 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/20cd63ce-b8cf-45fa-9d89-d917cff2894b-proxy-tls\") pod \"machine-config-controller-84d6567774-99fpp\" (UID: \"20cd63ce-b8cf-45fa-9d89-d917cff2894b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575437 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73d0a80a-e569-428a-b251-33f28e06fffd-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575451 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575467 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575490 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aea78078-eab1-4c82-b072-e6b65f959815-audit-policies\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575504 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jchq\" (UniqueName: \"kubernetes.io/projected/8944fd86-eca1-4882-896d-1cd3faa4b418-kube-api-access-6jchq\") pod \"package-server-manager-789f6589d5-hmjv6\" (UID: \"8944fd86-eca1-4882-896d-1cd3faa4b418\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575519 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65e79489-5e6b-421c-8019-b1d5161a0341-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ffw4d\" (UID: \"65e79489-5e6b-421c-8019-b1d5161a0341\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575555 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a52a543-c530-48d9-a046-ac4008df0477-audit-dir\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575570 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575587 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfb29a82-8be0-4219-81b1-fecfcb4e1061-metrics-certs\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575601 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d9e5b29-c71c-4129-bd91-ccb81940c815-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575617 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-chjdb\" (UID: \"8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575634 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndvvf\" (UniqueName: \"kubernetes.io/projected/a9930c0b-af24-4e39-b8e6-199a40779aff-kube-api-access-ndvvf\") pod \"dns-default-hrfwj\" (UID: \"a9930c0b-af24-4e39-b8e6-199a40779aff\") " pod="openshift-dns/dns-default-hrfwj" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575651 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af4fa723-4cc5-4fa1-9162-fa20b958fa29-config\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575667 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9930c0b-af24-4e39-b8e6-199a40779aff-config-volume\") pod \"dns-default-hrfwj\" (UID: \"a9930c0b-af24-4e39-b8e6-199a40779aff\") " pod="openshift-dns/dns-default-hrfwj" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575692 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv7mz\" (UniqueName: \"kubernetes.io/projected/dd4cdd19-fcd5-4fa7-835b-f2c233746297-kube-api-access-cv7mz\") pod \"migrator-59844c95c7-vkfk8\" (UID: \"dd4cdd19-fcd5-4fa7-835b-f2c233746297\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575708 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575724 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c37a69a-9a13-400f-bfff-0886b6062725-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4x88q\" (UID: \"9c37a69a-9a13-400f-bfff-0886b6062725\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575738 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/130dae88-caa3-4e75-b3fc-d3b6dcd5b577-profile-collector-cert\") pod \"catalog-operator-68c6474976-mw25p\" (UID: \"130dae88-caa3-4e75-b3fc-d3b6dcd5b577\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575755 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a2068cc-b08f-467a-aaf9-a3bbfd99511d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njw5m\" (UID: \"3a2068cc-b08f-467a-aaf9-a3bbfd99511d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575778 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-apiservice-cert\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575791 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4d9e5b29-c71c-4129-bd91-ccb81940c815-images\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575825 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrqzz\" (UniqueName: \"kubernetes.io/projected/9c37a69a-9a13-400f-bfff-0886b6062725-kube-api-access-xrqzz\") pod \"kube-storage-version-migrator-operator-b67b599dd-4x88q\" (UID: \"9c37a69a-9a13-400f-bfff-0886b6062725\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575842 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a50ebf9b-a11e-47ac-828c-f1858be195d7-metrics-tls\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575855 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a2068cc-b08f-467a-aaf9-a3bbfd99511d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njw5m\" (UID: \"3a2068cc-b08f-467a-aaf9-a3bbfd99511d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575910 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aea78078-eab1-4c82-b072-e6b65f959815-audit-dir\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575926 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tqqt\" (UniqueName: \"kubernetes.io/projected/c79ee24d-6dc8-4c72-911a-9ea6810a9f9a-kube-api-access-4tqqt\") pod \"service-ca-9c57cc56f-zzk29\" (UID: \"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a\") " pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575943 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575958 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkzz4\" (UniqueName: \"kubernetes.io/projected/ac8a7752-ba4b-41eb-a085-b493f6876beb-kube-api-access-kkzz4\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575975 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stgvv\" (UniqueName: \"kubernetes.io/projected/555517ab-ec2d-4534-8cc4-3ecbcdda7a1b-kube-api-access-stgvv\") pod \"openshift-controller-manager-operator-756b6f6bc6-zbt4w\" (UID: \"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575991 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-registry-tls\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576007 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/aea78078-eab1-4c82-b072-e6b65f959815-etcd-client\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576023 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576039 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/80a7c9c5-51fd-457c-a16b-c7ad90f92811-metrics-tls\") pod \"dns-operator-744455d44c-mp5g5\" (UID: \"80a7c9c5-51fd-457c-a16b-c7ad90f92811\") " pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576057 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/af4fa723-4cc5-4fa1-9162-fa20b958fa29-etcd-client\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576078 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfb29a82-8be0-4219-81b1-fecfcb4e1061-service-ca-bundle\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576094 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7aae80f4-df3d-4545-8a9b-5a840e379b65-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xz8vz\" (UID: \"7aae80f4-df3d-4545-8a9b-5a840e379b65\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576111 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5d92565-846d-43a6-92e2-02351fec2f63-serving-cert\") pod \"service-ca-operator-777779d784-qthvh\" (UID: \"d5d92565-846d-43a6-92e2-02351fec2f63\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576140 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aae80f4-df3d-4545-8a9b-5a840e379b65-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xz8vz\" (UID: \"7aae80f4-df3d-4545-8a9b-5a840e379b65\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576157 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576175 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c79ee24d-6dc8-4c72-911a-9ea6810a9f9a-signing-key\") pod \"service-ca-9c57cc56f-zzk29\" (UID: 
\"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a\") " pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576189 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a2068cc-b08f-467a-aaf9-a3bbfd99511d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njw5m\" (UID: \"3a2068cc-b08f-467a-aaf9-a3bbfd99511d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576207 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/555517ab-ec2d-4534-8cc4-3ecbcdda7a1b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zbt4w\" (UID: \"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576222 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7665\" (UniqueName: \"kubernetes.io/projected/5243a1a5-2eaa-4437-b10e-602439c7c838-kube-api-access-b7665\") pod \"marketplace-operator-79b997595-tr6kv\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576238 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/236e4954-0baf-4d9e-b36f-eed37707af26-certs\") pod \"machine-config-server-2hvtm\" (UID: \"236e4954-0baf-4d9e-b36f-eed37707af26\") " pod="openshift-machine-config-operator/machine-config-server-2hvtm" Jan 30 21:16:53 crc kubenswrapper[4751]: E0130 21:16:53.576372 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:54.076356422 +0000 UTC m=+152.822179071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.577820 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/af4fa723-4cc5-4fa1-9162-fa20b958fa29-etcd-service-ca\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.579546 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.580078 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/af4fa723-4cc5-4fa1-9162-fa20b958fa29-etcd-ca\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.580535 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-audit-policies\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.580855 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aea78078-eab1-4c82-b072-e6b65f959815-serving-cert\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.581091 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73d0a80a-e569-428a-b251-33f28e06fffd-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.585831 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dfb29a82-8be0-4219-81b1-fecfcb4e1061-stats-auth\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.586394 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/542e69b1-7290-4693-b85b-5c9566314a51-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" 
(UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.587741 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/542e69b1-7290-4693-b85b-5c9566314a51-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.588025 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.588583 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/aea78078-eab1-4c82-b072-e6b65f959815-etcd-client\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.588816 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aea78078-eab1-4c82-b072-e6b65f959815-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.588890 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.589406 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c37a69a-9a13-400f-bfff-0886b6062725-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4x88q\" (UID: \"9c37a69a-9a13-400f-bfff-0886b6062725\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.589432 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c37a69a-9a13-400f-bfff-0886b6062725-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4x88q\" (UID: \"9c37a69a-9a13-400f-bfff-0886b6062725\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.589730 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfb29a82-8be0-4219-81b1-fecfcb4e1061-service-ca-bundle\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 
30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.591540 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-trusted-ca\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.591942 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aea78078-eab1-4c82-b072-e6b65f959815-audit-dir\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.592143 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/555517ab-ec2d-4534-8cc4-3ecbcdda7a1b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zbt4w\" (UID: \"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.592554 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dfb29a82-8be0-4219-81b1-fecfcb4e1061-default-certificate\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.594607 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a52a543-c530-48d9-a046-ac4008df0477-audit-dir\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.595433 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af4fa723-4cc5-4fa1-9162-fa20b958fa29-config\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.595812 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.598943 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfb29a82-8be0-4219-81b1-fecfcb4e1061-metrics-certs\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.599595 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aea78078-eab1-4c82-b072-e6b65f959815-audit-policies\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.599629 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.599848 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/aea78078-eab1-4c82-b072-e6b65f959815-encryption-config\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.600199 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af4fa723-4cc5-4fa1-9162-fa20b958fa29-serving-cert\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.601589 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/555517ab-ec2d-4534-8cc4-3ecbcdda7a1b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zbt4w\" (UID: \"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.601628 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-registry-certificates\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.601886 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/aea78078-eab1-4c82-b072-e6b65f959815-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.602129 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73d0a80a-e569-428a-b251-33f28e06fffd-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.603084 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/af4fa723-4cc5-4fa1-9162-fa20b958fa29-etcd-client\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.604014 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.604128 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.605029 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.605075 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.610517 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-bound-sa-token\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.612977 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.613036 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/80a7c9c5-51fd-457c-a16b-c7ad90f92811-metrics-tls\") pod \"dns-operator-744455d44c-mp5g5\" (UID: \"80a7c9c5-51fd-457c-a16b-c7ad90f92811\") " pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.613065 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-registry-tls\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.615412 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: 
\"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.615861 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-chjdb\" (UID: \"8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.627005 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2nhc\" (UniqueName: \"kubernetes.io/projected/8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6-kube-api-access-n2nhc\") pod \"cluster-samples-operator-665b6dd947-chjdb\" (UID: \"8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.650784 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxthv\" (UniqueName: \"kubernetes.io/projected/aea78078-eab1-4c82-b072-e6b65f959815-kube-api-access-kxthv\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.667727 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkplm\" (UniqueName: \"kubernetes.io/projected/8a52a543-c530-48d9-a046-ac4008df0477-kube-api-access-qkplm\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.675038 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-plkp9"] Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.676855 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-tmpfs\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.676882 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxkk4\" (UniqueName: \"kubernetes.io/projected/8141131d-95f7-4103-bd2d-24630fc8e9b6-kube-api-access-wxkk4\") pod \"ingress-canary-m2zrs\" (UID: \"8141131d-95f7-4103-bd2d-24630fc8e9b6\") " pod="openshift-ingress-canary/ingress-canary-m2zrs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.676899 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c79ee24d-6dc8-4c72-911a-9ea6810a9f9a-signing-cabundle\") pod \"service-ca-9c57cc56f-zzk29\" (UID: \"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a\") " pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.676913 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8141131d-95f7-4103-bd2d-24630fc8e9b6-cert\") pod \"ingress-canary-m2zrs\" (UID: \"8141131d-95f7-4103-bd2d-24630fc8e9b6\") " 
pod="openshift-ingress-canary/ingress-canary-m2zrs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.676928 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-socket-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.676944 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65e79489-5e6b-421c-8019-b1d5161a0341-config\") pod \"kube-controller-manager-operator-78b949d7b-ffw4d\" (UID: \"65e79489-5e6b-421c-8019-b1d5161a0341\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.676960 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4d9e5b29-c71c-4129-bd91-ccb81940c815-proxy-tls\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.676983 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aae80f4-df3d-4545-8a9b-5a840e379b65-config\") pod \"kube-apiserver-operator-766d6c64bb-xz8vz\" (UID: \"7aae80f4-df3d-4545-8a9b-5a840e379b65\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.676997 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-plugins-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677010 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6167bc7b-37d7-493c-93a9-dda69bedad76-srv-cert\") pod \"olm-operator-6b444d44fb-z2l88\" (UID: \"6167bc7b-37d7-493c-93a9-dda69bedad76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677025 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a50ebf9b-a11e-47ac-828c-f1858be195d7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677040 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhbm8\" (UniqueName: \"kubernetes.io/projected/130dae88-caa3-4e75-b3fc-d3b6dcd5b577-kube-api-access-hhbm8\") pod \"catalog-operator-68c6474976-mw25p\" (UID: \"130dae88-caa3-4e75-b3fc-d3b6dcd5b577\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677054 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-csi-data-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677070 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a50ebf9b-a11e-47ac-828c-f1858be195d7-trusted-ca\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677084 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/236e4954-0baf-4d9e-b36f-eed37707af26-node-bootstrap-token\") pod \"machine-config-server-2hvtm\" (UID: \"236e4954-0baf-4d9e-b36f-eed37707af26\") " pod="openshift-machine-config-operator/machine-config-server-2hvtm" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677098 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6167bc7b-37d7-493c-93a9-dda69bedad76-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z2l88\" (UID: \"6167bc7b-37d7-493c-93a9-dda69bedad76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677119 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbjww\" (UniqueName: \"kubernetes.io/projected/236e4954-0baf-4d9e-b36f-eed37707af26-kube-api-access-lbjww\") pod \"machine-config-server-2hvtm\" (UID: \"236e4954-0baf-4d9e-b36f-eed37707af26\") " pod="openshift-machine-config-operator/machine-config-server-2hvtm" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677134 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-mountpoint-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677148 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8944fd86-eca1-4882-896d-1cd3faa4b418-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hmjv6\" (UID: \"8944fd86-eca1-4882-896d-1cd3faa4b418\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677164 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tr6kv\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677180 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20cd63ce-b8cf-45fa-9d89-d917cff2894b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-99fpp\" (UID: 
\"20cd63ce-b8cf-45fa-9d89-d917cff2894b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677196 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shckb\" (UniqueName: \"kubernetes.io/projected/c1f866c2-11a6-4c9b-8d42-54e5f0a18195-kube-api-access-shckb\") pod \"multus-admission-controller-857f4d67dd-l4lnd\" (UID: \"c1f866c2-11a6-4c9b-8d42-54e5f0a18195\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677212 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/130dae88-caa3-4e75-b3fc-d3b6dcd5b577-srv-cert\") pod \"catalog-operator-68c6474976-mw25p\" (UID: \"130dae88-caa3-4e75-b3fc-d3b6dcd5b577\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677230 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc9ed63a-23a2-4b50-a290-0409ff14fd95-config-volume\") pod \"collect-profiles-29496795-lg25p\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677249 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fdq8\" (UniqueName: \"kubernetes.io/projected/4d9e5b29-c71c-4129-bd91-ccb81940c815-kube-api-access-5fdq8\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677267 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47wdx\" (UniqueName: \"kubernetes.io/projected/cc9ed63a-23a2-4b50-a290-0409ff14fd95-kube-api-access-47wdx\") pod \"collect-profiles-29496795-lg25p\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677283 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ttsm\" (UniqueName: \"kubernetes.io/projected/d5d92565-846d-43a6-92e2-02351fec2f63-kube-api-access-5ttsm\") pod \"service-ca-operator-777779d784-qthvh\" (UID: \"d5d92565-846d-43a6-92e2-02351fec2f63\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677302 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc9ed63a-23a2-4b50-a290-0409ff14fd95-secret-volume\") pod \"collect-profiles-29496795-lg25p\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677317 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tr6kv\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677342 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-tmpfs\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677358 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a9930c0b-af24-4e39-b8e6-199a40779aff-metrics-tls\") pod \"dns-default-hrfwj\" (UID: \"a9930c0b-af24-4e39-b8e6-199a40779aff\") " pod="openshift-dns/dns-default-hrfwj" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677374 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmhw4\" (UniqueName: \"kubernetes.io/projected/357257a0-2b96-4833-84cb-1c4326c34e61-kube-api-access-pmhw4\") pod \"control-plane-machine-set-operator-78cbb6b69f-xf2m8\" (UID: \"357257a0-2b96-4833-84cb-1c4326c34e61\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677393 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65e79489-5e6b-421c-8019-b1d5161a0341-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ffw4d\" (UID: \"65e79489-5e6b-421c-8019-b1d5161a0341\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677407 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-792m2\" (UniqueName: \"kubernetes.io/projected/20cd63ce-b8cf-45fa-9d89-d917cff2894b-kube-api-access-792m2\") pod \"machine-config-controller-84d6567774-99fpp\" (UID: \"20cd63ce-b8cf-45fa-9d89-d917cff2894b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677429 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6cq6\" (UniqueName: \"kubernetes.io/projected/a50ebf9b-a11e-47ac-828c-f1858be195d7-kube-api-access-k6cq6\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677443 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/20cd63ce-b8cf-45fa-9d89-d917cff2894b-proxy-tls\") pod \"machine-config-controller-84d6567774-99fpp\" (UID: \"20cd63ce-b8cf-45fa-9d89-d917cff2894b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677460 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 
21:16:53.677477 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jchq\" (UniqueName: \"kubernetes.io/projected/8944fd86-eca1-4882-896d-1cd3faa4b418-kube-api-access-6jchq\") pod \"package-server-manager-789f6589d5-hmjv6\" (UID: \"8944fd86-eca1-4882-896d-1cd3faa4b418\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677492 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65e79489-5e6b-421c-8019-b1d5161a0341-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ffw4d\" (UID: \"65e79489-5e6b-421c-8019-b1d5161a0341\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677507 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d9e5b29-c71c-4129-bd91-ccb81940c815-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677522 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndvvf\" (UniqueName: \"kubernetes.io/projected/a9930c0b-af24-4e39-b8e6-199a40779aff-kube-api-access-ndvvf\") pod \"dns-default-hrfwj\" (UID: \"a9930c0b-af24-4e39-b8e6-199a40779aff\") " pod="openshift-dns/dns-default-hrfwj" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677536 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9930c0b-af24-4e39-b8e6-199a40779aff-config-volume\") pod \"dns-default-hrfwj\" (UID: \"a9930c0b-af24-4e39-b8e6-199a40779aff\") " pod="openshift-dns/dns-default-hrfwj" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677552 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv7mz\" (UniqueName: \"kubernetes.io/projected/dd4cdd19-fcd5-4fa7-835b-f2c233746297-kube-api-access-cv7mz\") pod \"migrator-59844c95c7-vkfk8\" (UID: \"dd4cdd19-fcd5-4fa7-835b-f2c233746297\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677568 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a2068cc-b08f-467a-aaf9-a3bbfd99511d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njw5m\" (UID: \"3a2068cc-b08f-467a-aaf9-a3bbfd99511d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677590 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/130dae88-caa3-4e75-b3fc-d3b6dcd5b577-profile-collector-cert\") pod \"catalog-operator-68c6474976-mw25p\" (UID: \"130dae88-caa3-4e75-b3fc-d3b6dcd5b577\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677603 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-apiservice-cert\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677617 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4d9e5b29-c71c-4129-bd91-ccb81940c815-images\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677633 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a2068cc-b08f-467a-aaf9-a3bbfd99511d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njw5m\" (UID: \"3a2068cc-b08f-467a-aaf9-a3bbfd99511d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677653 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a50ebf9b-a11e-47ac-828c-f1858be195d7-metrics-tls\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677669 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tqqt\" (UniqueName: \"kubernetes.io/projected/c79ee24d-6dc8-4c72-911a-9ea6810a9f9a-kube-api-access-4tqqt\") pod \"service-ca-9c57cc56f-zzk29\" (UID: \"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a\") " pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677684 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkzz4\" (UniqueName: \"kubernetes.io/projected/ac8a7752-ba4b-41eb-a085-b493f6876beb-kube-api-access-kkzz4\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677704 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7aae80f4-df3d-4545-8a9b-5a840e379b65-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xz8vz\" (UID: \"7aae80f4-df3d-4545-8a9b-5a840e379b65\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677720 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5d92565-846d-43a6-92e2-02351fec2f63-serving-cert\") pod \"service-ca-operator-777779d784-qthvh\" (UID: \"d5d92565-846d-43a6-92e2-02351fec2f63\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677734 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aae80f4-df3d-4545-8a9b-5a840e379b65-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xz8vz\" (UID: 
\"7aae80f4-df3d-4545-8a9b-5a840e379b65\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677751 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c79ee24d-6dc8-4c72-911a-9ea6810a9f9a-signing-key\") pod \"service-ca-9c57cc56f-zzk29\" (UID: \"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a\") " pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677765 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a2068cc-b08f-467a-aaf9-a3bbfd99511d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njw5m\" (UID: \"3a2068cc-b08f-467a-aaf9-a3bbfd99511d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677779 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7665\" (UniqueName: \"kubernetes.io/projected/5243a1a5-2eaa-4437-b10e-602439c7c838-kube-api-access-b7665\") pod \"marketplace-operator-79b997595-tr6kv\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677793 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/236e4954-0baf-4d9e-b36f-eed37707af26-certs\") pod \"machine-config-server-2hvtm\" (UID: \"236e4954-0baf-4d9e-b36f-eed37707af26\") " pod="openshift-machine-config-operator/machine-config-server-2hvtm" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677807 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1f866c2-11a6-4c9b-8d42-54e5f0a18195-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l4lnd\" (UID: \"c1f866c2-11a6-4c9b-8d42-54e5f0a18195\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677821 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj26f\" (UniqueName: \"kubernetes.io/projected/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-kube-api-access-xj26f\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677835 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5d92565-846d-43a6-92e2-02351fec2f63-config\") pod \"service-ca-operator-777779d784-qthvh\" (UID: \"d5d92565-846d-43a6-92e2-02351fec2f63\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677852 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/357257a0-2b96-4833-84cb-1c4326c34e61-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xf2m8\" (UID: \"357257a0-2b96-4833-84cb-1c4326c34e61\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8" Jan 30 21:16:53 crc 
kubenswrapper[4751]: I0130 21:16:53.677866 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-webhook-cert\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677881 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-registration-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677898 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clhl7\" (UniqueName: \"kubernetes.io/projected/6167bc7b-37d7-493c-93a9-dda69bedad76-kube-api-access-clhl7\") pod \"olm-operator-6b444d44fb-z2l88\" (UID: \"6167bc7b-37d7-493c-93a9-dda69bedad76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.678310 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc9ed63a-23a2-4b50-a290-0409ff14fd95-config-volume\") pod \"collect-profiles-29496795-lg25p\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.678424 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-mountpoint-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.679477 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20cd63ce-b8cf-45fa-9d89-d917cff2894b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-99fpp\" (UID: \"20cd63ce-b8cf-45fa-9d89-d917cff2894b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.679711 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a50ebf9b-a11e-47ac-828c-f1858be195d7-trusted-ca\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.679780 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-csi-data-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: E0130 21:16:53.680029 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 21:16:54.18001934 +0000 UTC m=+152.925841989 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.680652 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tr6kv\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.680667 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-plugins-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.680939 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc9ed63a-23a2-4b50-a290-0409ff14fd95-secret-volume\") pod \"collect-profiles-29496795-lg25p\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.681309 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-socket-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.681535 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65e79489-5e6b-421c-8019-b1d5161a0341-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ffw4d\" (UID: \"65e79489-5e6b-421c-8019-b1d5161a0341\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.681606 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d9e5b29-c71c-4129-bd91-ccb81940c815-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.682229 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9930c0b-af24-4e39-b8e6-199a40779aff-config-volume\") pod \"dns-default-hrfwj\" (UID: \"a9930c0b-af24-4e39-b8e6-199a40779aff\") " pod="openshift-dns/dns-default-hrfwj" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.683243 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d5d92565-846d-43a6-92e2-02351fec2f63-config\") pod \"service-ca-operator-777779d784-qthvh\" (UID: \"d5d92565-846d-43a6-92e2-02351fec2f63\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.683256 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6167bc7b-37d7-493c-93a9-dda69bedad76-srv-cert\") pod \"olm-operator-6b444d44fb-z2l88\" (UID: \"6167bc7b-37d7-493c-93a9-dda69bedad76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.683485 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65e79489-5e6b-421c-8019-b1d5161a0341-config\") pod \"kube-controller-manager-operator-78b949d7b-ffw4d\" (UID: \"65e79489-5e6b-421c-8019-b1d5161a0341\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.683750 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4d9e5b29-c71c-4129-bd91-ccb81940c815-images\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.683792 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a9930c0b-af24-4e39-b8e6-199a40779aff-metrics-tls\") pod \"dns-default-hrfwj\" (UID: \"a9930c0b-af24-4e39-b8e6-199a40779aff\") " pod="openshift-dns/dns-default-hrfwj" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.684861 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c79ee24d-6dc8-4c72-911a-9ea6810a9f9a-signing-cabundle\") pod \"service-ca-9c57cc56f-zzk29\" (UID: \"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a\") " pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.684933 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8944fd86-eca1-4882-896d-1cd3faa4b418-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hmjv6\" (UID: \"8944fd86-eca1-4882-896d-1cd3faa4b418\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.686784 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/236e4954-0baf-4d9e-b36f-eed37707af26-node-bootstrap-token\") pod \"machine-config-server-2hvtm\" (UID: \"236e4954-0baf-4d9e-b36f-eed37707af26\") " pod="openshift-machine-config-operator/machine-config-server-2hvtm" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.687349 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tr6kv\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.687729 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a2068cc-b08f-467a-aaf9-a3bbfd99511d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njw5m\" (UID: \"3a2068cc-b08f-467a-aaf9-a3bbfd99511d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.688118 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aae80f4-df3d-4545-8a9b-5a840e379b65-config\") pod \"kube-apiserver-operator-766d6c64bb-xz8vz\" (UID: \"7aae80f4-df3d-4545-8a9b-5a840e379b65\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.688148 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-registration-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.688560 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/236e4954-0baf-4d9e-b36f-eed37707af26-certs\") pod \"machine-config-server-2hvtm\" (UID: \"236e4954-0baf-4d9e-b36f-eed37707af26\") " pod="openshift-machine-config-operator/machine-config-server-2hvtm" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.689057 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a2068cc-b08f-467a-aaf9-a3bbfd99511d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njw5m\" (UID: \"3a2068cc-b08f-467a-aaf9-a3bbfd99511d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.689101 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.689620 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8141131d-95f7-4103-bd2d-24630fc8e9b6-cert\") pod \"ingress-canary-m2zrs\" (UID: \"8141131d-95f7-4103-bd2d-24630fc8e9b6\") " pod="openshift-ingress-canary/ingress-canary-m2zrs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.690483 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/20cd63ce-b8cf-45fa-9d89-d917cff2894b-proxy-tls\") pod \"machine-config-controller-84d6567774-99fpp\" (UID: \"20cd63ce-b8cf-45fa-9d89-d917cff2894b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.690488 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/130dae88-caa3-4e75-b3fc-d3b6dcd5b577-profile-collector-cert\") pod \"catalog-operator-68c6474976-mw25p\" (UID: \"130dae88-caa3-4e75-b3fc-d3b6dcd5b577\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.690578 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-webhook-cert\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.690925 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a50ebf9b-a11e-47ac-828c-f1858be195d7-metrics-tls\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.691061 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-apiservice-cert\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.691265 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4d9e5b29-c71c-4129-bd91-ccb81940c815-proxy-tls\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.692951 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5d92565-846d-43a6-92e2-02351fec2f63-serving-cert\") pod \"service-ca-operator-777779d784-qthvh\" (UID: \"d5d92565-846d-43a6-92e2-02351fec2f63\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.693084 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/c1f866c2-11a6-4c9b-8d42-54e5f0a18195-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l4lnd\" (UID: \"c1f866c2-11a6-4c9b-8d42-54e5f0a18195\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.693198 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aae80f4-df3d-4545-8a9b-5a840e379b65-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xz8vz\" (UID: \"7aae80f4-df3d-4545-8a9b-5a840e379b65\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.693203 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/357257a0-2b96-4833-84cb-1c4326c34e61-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xf2m8\" (UID: \"357257a0-2b96-4833-84cb-1c4326c34e61\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.693729 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c79ee24d-6dc8-4c72-911a-9ea6810a9f9a-signing-key\") pod \"service-ca-9c57cc56f-zzk29\" (UID: \"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a\") " pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.699785 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/130dae88-caa3-4e75-b3fc-d3b6dcd5b577-srv-cert\") pod \"catalog-operator-68c6474976-mw25p\" (UID: \"130dae88-caa3-4e75-b3fc-d3b6dcd5b577\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.700192 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6167bc7b-37d7-493c-93a9-dda69bedad76-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z2l88\" (UID: \"6167bc7b-37d7-493c-93a9-dda69bedad76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.715866 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/542e69b1-7290-4693-b85b-5c9566314a51-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.716815 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8l2v5"] Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.722813 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.727611 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdfzb\" (UniqueName: \"kubernetes.io/projected/af4fa723-4cc5-4fa1-9162-fa20b958fa29-kube-api-access-hdfzb\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.735865 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8jsqt"] Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.736097 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.736948 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-nk5rn"] Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.747806 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxbcq\" (UniqueName: \"kubernetes.io/projected/dfb29a82-8be0-4219-81b1-fecfcb4e1061-kube-api-access-dxbcq\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.752080 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6gckm"] Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.771259 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stgvv\" (UniqueName: \"kubernetes.io/projected/555517ab-ec2d-4534-8cc4-3ecbcdda7a1b-kube-api-access-stgvv\") pod \"openshift-controller-manager-operator-756b6f6bc6-zbt4w\" (UID: \"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.779204 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:53 crc kubenswrapper[4751]: E0130 21:16:53.779379 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:54.279352827 +0000 UTC m=+153.025175486 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.779551 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: E0130 21:16:53.779848 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:54.27983579 +0000 UTC m=+153.025658459 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.789695 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cp5d\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-kube-api-access-2cp5d\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.803189 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" event={"ID":"ebb4c857-4f54-440f-81d7-74eadc588099","Type":"ContainerStarted","Data":"a2b4482621e2d1c6e88455c5d82d8284f2be0785d2d69e48d1c8409293eefb8c"} Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.807131 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w"] Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.810150 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"] Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.810588 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.814674 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" event={"ID":"6d872f03-d4d0-49bc-9758-05060035dafa","Type":"ContainerStarted","Data":"6b42ab36cab01aee98a06d95fe6c40dede3332d8845f2cd0513f8746b8ab4a01"} Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.816268 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9fpj\" (UniqueName: \"kubernetes.io/projected/80a7c9c5-51fd-457c-a16b-c7ad90f92811-kube-api-access-s9fpj\") pod \"dns-operator-744455d44c-mp5g5\" (UID: \"80a7c9c5-51fd-457c-a16b-c7ad90f92811\") " pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.821711 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" event={"ID":"7b579d29-157b-4ff2-b623-d4af8fd6a8fe","Type":"ContainerStarted","Data":"d55e421aea0b26254543c3385167bdd9b2b3bddbed4b1e081fd8b94454d8f350"} Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.821748 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" event={"ID":"7b579d29-157b-4ff2-b623-d4af8fd6a8fe","Type":"ContainerStarted","Data":"f6f99673aa6bb0aad2f3c4036b7f5f6b83610d6ebe270ac0c27a7c0d5e533722"} Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.825093 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.825436 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8l2v5" event={"ID":"21dc9dc0-702d-49a7-baed-f8e70f6867f3","Type":"ContainerStarted","Data":"5a76b203cc9771016d91d3bc5cdeac6be80b178bf079cbd548e925d6b80782e8"} Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.825723 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.828091 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" event={"ID":"61e09136-e0d4-4c75-ad01-543778867411","Type":"ContainerStarted","Data":"bcc3e35fa7bf77d352470a19ce3b00e0ae26473ecc7d562f4aa3b014710b8b83"} Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.829508 4751 generic.go:334] "Generic (PLEG): container finished" podID="80b9c760-3c34-42cb-bb23-1f11dad50e58" containerID="ecb17adc0b077bbd204ae1ae355d34b0117514749487f0933b8b0674bb22cc23" exitCode=0 Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.829730 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" event={"ID":"80b9c760-3c34-42cb-bb23-1f11dad50e58","Type":"ContainerDied","Data":"ecb17adc0b077bbd204ae1ae355d34b0117514749487f0933b8b0674bb22cc23"} Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.829765 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" event={"ID":"80b9c760-3c34-42cb-bb23-1f11dad50e58","Type":"ContainerStarted","Data":"5d290ac6257102448a66da1338c6ff7602f0050bddbafd5f5dfec0384a0c4312"} Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.829829 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfn6c\" (UniqueName: \"kubernetes.io/projected/542e69b1-7290-4693-b85b-5c9566314a51-kube-api-access-gfn6c\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.830970 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6gckm" event={"ID":"b4ed27b8-56e7-4e93-aea6-83adae8affb6","Type":"ContainerStarted","Data":"05f6960d4109556e39ddd40a564de7a622a0e36a81ed53a0885707eebe6cf349"} Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.843756 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.853753 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9"] Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.856175 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrqzz\" (UniqueName: \"kubernetes.io/projected/9c37a69a-9a13-400f-bfff-0886b6062725-kube-api-access-xrqzz\") pod \"kube-storage-version-migrator-operator-b67b599dd-4x88q\" (UID: \"9c37a69a-9a13-400f-bfff-0886b6062725\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.856299 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" Jan 30 21:16:53 crc kubenswrapper[4751]: W0130 21:16:53.864179 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod322809f5_4f4c_487e_8488_6c62bac86f8f.slice/crio-f7e0d553caf37cf1c65a97cae3829801333e8d0eb24ba3398a66bb00e08506f3 WatchSource:0}: Error finding container f7e0d553caf37cf1c65a97cae3829801333e8d0eb24ba3398a66bb00e08506f3: Status 404 returned error can't find the container with id f7e0d553caf37cf1c65a97cae3829801333e8d0eb24ba3398a66bb00e08506f3 Jan 30 21:16:53 crc kubenswrapper[4751]: W0130 21:16:53.864467 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3639569_5d39_4fa1_863c_45307b3da476.slice/crio-6c52c1884413cb9641c5954d9df153e6466c3979591a5be6ee9c86633ba98a31 WatchSource:0}: Error finding container 6c52c1884413cb9641c5954d9df153e6466c3979591a5be6ee9c86633ba98a31: Status 404 returned error can't find the container with id 6c52c1884413cb9641c5954d9df153e6466c3979591a5be6ee9c86633ba98a31 Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.872051 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxkk4\" (UniqueName: \"kubernetes.io/projected/8141131d-95f7-4103-bd2d-24630fc8e9b6-kube-api-access-wxkk4\") pod \"ingress-canary-m2zrs\" (UID: \"8141131d-95f7-4103-bd2d-24630fc8e9b6\") " pod="openshift-ingress-canary/ingress-canary-m2zrs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.872579 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-7bw65"] Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.873568 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9wvms"] Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.888661 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:53 crc kubenswrapper[4751]: E0130 21:16:53.888816 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:54.388791961 +0000 UTC m=+153.134614610 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.889013 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: E0130 21:16:53.889640 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:54.389630293 +0000 UTC m=+153.135452942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.893564 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clhl7\" (UniqueName: \"kubernetes.io/projected/6167bc7b-37d7-493c-93a9-dda69bedad76-kube-api-access-clhl7\") pod \"olm-operator-6b444d44fb-z2l88\" (UID: \"6167bc7b-37d7-493c-93a9-dda69bedad76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.907444 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-792m2\" (UniqueName: \"kubernetes.io/projected/20cd63ce-b8cf-45fa-9d89-d917cff2894b-kube-api-access-792m2\") pod \"machine-config-controller-84d6567774-99fpp\" (UID: \"20cd63ce-b8cf-45fa-9d89-d917cff2894b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.930933 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6cq6\" (UniqueName: \"kubernetes.io/projected/a50ebf9b-a11e-47ac-828c-f1858be195d7-kube-api-access-k6cq6\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.948613 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbjww\" (UniqueName: \"kubernetes.io/projected/236e4954-0baf-4d9e-b36f-eed37707af26-kube-api-access-lbjww\") pod \"machine-config-server-2hvtm\" (UID: \"236e4954-0baf-4d9e-b36f-eed37707af26\") " pod="openshift-machine-config-operator/machine-config-server-2hvtm" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.961972 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.971241 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-m2zrs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.982614 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhbm8\" (UniqueName: \"kubernetes.io/projected/130dae88-caa3-4e75-b3fc-d3b6dcd5b577-kube-api-access-hhbm8\") pod \"catalog-operator-68c6474976-mw25p\" (UID: \"130dae88-caa3-4e75-b3fc-d3b6dcd5b577\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.983910 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2hvtm" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.989813 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.990478 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tqqt\" (UniqueName: \"kubernetes.io/projected/c79ee24d-6dc8-4c72-911a-9ea6810a9f9a-kube-api-access-4tqqt\") pod \"service-ca-9c57cc56f-zzk29\" (UID: \"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a\") " pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" Jan 30 21:16:53 crc kubenswrapper[4751]: E0130 21:16:53.990566 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:54.490553761 +0000 UTC m=+153.236376410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.999183 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6dcxn"] Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.009713 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jchq\" (UniqueName: \"kubernetes.io/projected/8944fd86-eca1-4882-896d-1cd3faa4b418-kube-api-access-6jchq\") pod \"package-server-manager-789f6589d5-hmjv6\" (UID: \"8944fd86-eca1-4882-896d-1cd3faa4b418\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.039512 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65e79489-5e6b-421c-8019-b1d5161a0341-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ffw4d\" (UID: \"65e79489-5e6b-421c-8019-b1d5161a0341\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.044478 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb"] Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.054017 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a50ebf9b-a11e-47ac-828c-f1858be195d7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.069241 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fdq8\" (UniqueName: \"kubernetes.io/projected/4d9e5b29-c71c-4129-bd91-ccb81940c815-kube-api-access-5fdq8\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.093217 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47wdx\" (UniqueName: \"kubernetes.io/projected/cc9ed63a-23a2-4b50-a290-0409ff14fd95-kube-api-access-47wdx\") pod \"collect-profiles-29496795-lg25p\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.094070 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.094581 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:54 crc kubenswrapper[4751]: E0130 21:16:54.094895 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:54.594884565 +0000 UTC m=+153.340707214 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.116119 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ttsm\" (UniqueName: \"kubernetes.io/projected/d5d92565-846d-43a6-92e2-02351fec2f63-kube-api-access-5ttsm\") pod \"service-ca-operator-777779d784-qthvh\" (UID: \"d5d92565-846d-43a6-92e2-02351fec2f63\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.128605 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.128639 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.132708 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndvvf\" (UniqueName: \"kubernetes.io/projected/a9930c0b-af24-4e39-b8e6-199a40779aff-kube-api-access-ndvvf\") pod \"dns-default-hrfwj\" (UID: \"a9930c0b-af24-4e39-b8e6-199a40779aff\") " pod="openshift-dns/dns-default-hrfwj" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.139720 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.152513 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmhw4\" (UniqueName: \"kubernetes.io/projected/357257a0-2b96-4833-84cb-1c4326c34e61-kube-api-access-pmhw4\") pod \"control-plane-machine-set-operator-78cbb6b69f-xf2m8\" (UID: \"357257a0-2b96-4833-84cb-1c4326c34e61\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.169986 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" Jan 30 21:16:54 crc kubenswrapper[4751]: W0130 21:16:54.173735 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod236e4954_0baf_4d9e_b36f_eed37707af26.slice/crio-6d35de23ffaea959ae81b921d830d837f52c056848673f50ac0e8d34bbebe493 WatchSource:0}: Error finding container 6d35de23ffaea959ae81b921d830d837f52c056848673f50ac0e8d34bbebe493: Status 404 returned error can't find the container with id 6d35de23ffaea959ae81b921d830d837f52c056848673f50ac0e8d34bbebe493 Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.180969 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7665\" (UniqueName: \"kubernetes.io/projected/5243a1a5-2eaa-4437-b10e-602439c7c838-kube-api-access-b7665\") pod \"marketplace-operator-79b997595-tr6kv\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.182339 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.190766 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.197076 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkzz4\" (UniqueName: \"kubernetes.io/projected/ac8a7752-ba4b-41eb-a085-b493f6876beb-kube-api-access-kkzz4\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.197247 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.197583 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:54 crc kubenswrapper[4751]: E0130 21:16:54.198169 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 21:16:54.698149093 +0000 UTC m=+153.443971742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.207035 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.217066 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7aae80f4-df3d-4545-8a9b-5a840e379b65-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xz8vz\" (UID: \"7aae80f4-df3d-4545-8a9b-5a840e379b65\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.217245 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.222834 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.232552 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.242649 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shckb\" (UniqueName: \"kubernetes.io/projected/c1f866c2-11a6-4c9b-8d42-54e5f0a18195-kube-api-access-shckb\") pod \"multus-admission-controller-857f4d67dd-l4lnd\" (UID: \"c1f866c2-11a6-4c9b-8d42-54e5f0a18195\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.246801 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.256627 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w"] Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.258377 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv7mz\" (UniqueName: \"kubernetes.io/projected/dd4cdd19-fcd5-4fa7-835b-f2c233746297-kube-api-access-cv7mz\") pod \"migrator-59844c95c7-vkfk8\" (UID: \"dd4cdd19-fcd5-4fa7-835b-f2c233746297\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.268830 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mp5g5"] Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.287046 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-hrfwj" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.294293 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a2068cc-b08f-467a-aaf9-a3bbfd99511d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njw5m\" (UID: \"3a2068cc-b08f-467a-aaf9-a3bbfd99511d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.298423 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.299234 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:54 crc kubenswrapper[4751]: E0130 21:16:54.299562 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:54.799551633 +0000 UTC m=+153.545374282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.302989 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zpvhg"] Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.318113 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj26f\" (UniqueName: \"kubernetes.io/projected/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-kube-api-access-xj26f\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.325360 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88"] Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.399860 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:54 crc kubenswrapper[4751]: E0130 21:16:54.400304 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 21:16:54.900283075 +0000 UTC m=+153.646105724 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.405799 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q"] Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.433005 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.450107 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.463733 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.477586 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.502416 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:54 crc kubenswrapper[4751]: E0130 21:16:54.502877 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.002867266 +0000 UTC m=+153.748689915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.537756 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.554523 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-m2zrs"] Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.554885 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.603608 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:54 crc kubenswrapper[4751]: E0130 21:16:54.603869 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.103855125 +0000 UTC m=+153.849677774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.621465 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d"] Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.704464 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:54 crc kubenswrapper[4751]: E0130 21:16:54.706960 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.206922407 +0000 UTC m=+153.952745056 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.782729 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p"] Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.782776 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp"] Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.791180 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc"] Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.806071 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:54 crc kubenswrapper[4751]: E0130 21:16:54.806219 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.306198853 +0000 UTC m=+154.052021502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.806309 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:54 crc kubenswrapper[4751]: E0130 21:16:54.806602 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.306594903 +0000 UTC m=+154.052417552 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.863563 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" event={"ID":"5c9671c2-84f9-4719-b497-4fa77803105b","Type":"ContainerStarted","Data":"22d4e0441404579c411a624d8d750283c4e57965f2a96bf423b811ee07efd8db"} Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.863615 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" event={"ID":"5c9671c2-84f9-4719-b497-4fa77803105b","Type":"ContainerStarted","Data":"9a7aae1f22c94a94fa3580fe84ad185240e004851381ce969903a5a4d6e1f1b2"} Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.873834 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" event={"ID":"322809f5-4f4c-487e-8488-6c62bac86f8f","Type":"ContainerStarted","Data":"f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b"} Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.874059 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" event={"ID":"322809f5-4f4c-487e-8488-6c62bac86f8f","Type":"ContainerStarted","Data":"f7e0d553caf37cf1c65a97cae3829801333e8d0eb24ba3398a66bb00e08506f3"} Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.874075 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.885485 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" event={"ID":"61e09136-e0d4-4c75-ad01-543778867411","Type":"ContainerStarted","Data":"4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3"} Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.885877 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.899838 4751 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-8z9vp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.899900 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" podUID="322809f5-4f4c-487e-8488-6c62bac86f8f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.903887 4751 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-8jsqt 
container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.903919 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" podUID="61e09136-e0d4-4c75-ad01-543778867411" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.909009 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:54 crc kubenswrapper[4751]: E0130 21:16:54.909401 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.409382999 +0000 UTC m=+154.155205648 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.910126 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p"] Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.931496 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf"] Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.945172 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8"] Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.949447 4751 csr.go:261] certificate signing request csr-bgs9s is approved, waiting to be issued Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.953055 4751 csr.go:257] certificate signing request csr-bgs9s is issued Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.991023 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" event={"ID":"65e79489-5e6b-421c-8019-b1d5161a0341","Type":"ContainerStarted","Data":"d864ac0bcc076ec8c9685884875686a30eb7c70458a350fc5884fec9ca43f99f"} Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.995576 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2hvtm" event={"ID":"236e4954-0baf-4d9e-b36f-eed37707af26","Type":"ContainerStarted","Data":"6d35de23ffaea959ae81b921d830d837f52c056848673f50ac0e8d34bbebe493"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.010879 4751 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.015636 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.515619072 +0000 UTC m=+154.261441721 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.016986 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" event={"ID":"f3639569-5d39-4fa1-863c-45307b3da476","Type":"ContainerStarted","Data":"79a21c75c107c4f2158bf7e172ea407be9cd48b91f6ed6fa8d70cbccfdcb656a"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.017027 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" event={"ID":"f3639569-5d39-4fa1-863c-45307b3da476","Type":"ContainerStarted","Data":"6c52c1884413cb9641c5954d9df153e6466c3979591a5be6ee9c86633ba98a31"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.029772 4751 generic.go:334] "Generic (PLEG): container finished" podID="aea78078-eab1-4c82-b072-e6b65f959815" containerID="b94b03a7192fe7d8804d522deb3ec025698483a8fc2ef870057c74c3995d16f8" exitCode=0 Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.029885 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" event={"ID":"aea78078-eab1-4c82-b072-e6b65f959815","Type":"ContainerDied","Data":"b94b03a7192fe7d8804d522deb3ec025698483a8fc2ef870057c74c3995d16f8"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.029911 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" event={"ID":"aea78078-eab1-4c82-b072-e6b65f959815","Type":"ContainerStarted","Data":"a44527cbe58f7fee1ceaf223fce522274d1ff8197428c02cc745a0767403f1ec"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.050824 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" event={"ID":"8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6","Type":"ContainerStarted","Data":"c9b287da83607151b9420b2586281c9669ffeccabc1f7b5fd6fc311facbd6eb9"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.061277 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" event={"ID":"9c37a69a-9a13-400f-bfff-0886b6062725","Type":"ContainerStarted","Data":"9a7afff7044e5e85896ce64d7793e274ce92f64d4888f911c586ba802b52ab29"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.066487 
4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" event={"ID":"80b9c760-3c34-42cb-bb23-1f11dad50e58","Type":"ContainerStarted","Data":"1a7879abe96eabbeb776b4634c7ae0e1936702b25f911824e0f364bfc112d5dc"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.066905 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.068707 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6gckm" event={"ID":"b4ed27b8-56e7-4e93-aea6-83adae8affb6","Type":"ContainerStarted","Data":"aca02e92ad644ec3adac0a27ce5b841b690fa8614def927cae46ab70d4f6b7cb"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.069111 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6gckm" Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.074108 4751 generic.go:334] "Generic (PLEG): container finished" podID="6d872f03-d4d0-49bc-9758-05060035dafa" containerID="4e83864f2464e79ff40daf58aed55d583ac6fc82aa4375e098a27d4341cf6206" exitCode=0 Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.074467 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" event={"ID":"6d872f03-d4d0-49bc-9758-05060035dafa","Type":"ContainerDied","Data":"4e83864f2464e79ff40daf58aed55d583ac6fc82aa4375e098a27d4341cf6206"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.073836 4751 patch_prober.go:28] interesting pod/console-operator-58897d9998-6gckm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.075863 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6gckm" podUID="b4ed27b8-56e7-4e93-aea6-83adae8affb6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.080084 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7bw65" event={"ID":"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df","Type":"ContainerStarted","Data":"843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.080114 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7bw65" event={"ID":"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df","Type":"ContainerStarted","Data":"d63914e011c114b25558640a8b61cb4256ca45025b1be36724b2e0af5265302e"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.084640 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" event={"ID":"af4fa723-4cc5-4fa1-9162-fa20b958fa29","Type":"ContainerStarted","Data":"f9b089997389b30f379a8257f5286fe8d62441d79ee23bede67069d166514437"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.086468 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" 
event={"ID":"80a7c9c5-51fd-457c-a16b-c7ad90f92811","Type":"ContainerStarted","Data":"5c6ff82948286b3cc8f625bf9d46258005421cb15eed0158df2a550a684ab697"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.088098 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8l2v5" event={"ID":"21dc9dc0-702d-49a7-baed-f8e70f6867f3","Type":"ContainerStarted","Data":"8dc25416ce0431f51bfd20b5e06b1682347e5539e22a7b4fc7e753265b6fc033"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.088531 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-8l2v5" Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.096415 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-w42cs" event={"ID":"dfb29a82-8be0-4219-81b1-fecfcb4e1061","Type":"ContainerStarted","Data":"f6ac8436f1fdf1f1406416501a90fbf4198d690b7d48a5884450e4eb3ebfdac1"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.096458 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-w42cs" event={"ID":"dfb29a82-8be0-4219-81b1-fecfcb4e1061","Type":"ContainerStarted","Data":"a6144931aee6b55595d67ead250bebeb7cd485c34dfa13319162d68a8ced2c29"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.096808 4751 patch_prober.go:28] interesting pod/downloads-7954f5f757-8l2v5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.096850 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8l2v5" podUID="21dc9dc0-702d-49a7-baed-f8e70f6867f3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.111387 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.112397 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.612381683 +0000 UTC m=+154.358204332 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.113664 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" event={"ID":"7b579d29-157b-4ff2-b623-d4af8fd6a8fe","Type":"ContainerStarted","Data":"f7efaa07b3d37ec3362704b99d115e2d1605bd1b2ac82ae46e09fd80b6304048"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.118487 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" event={"ID":"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b","Type":"ContainerStarted","Data":"b0be84163e14aa4348caefe04d4691c0deb5d7eaec59b2fb21fdf5c719b3d810"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.159199 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" event={"ID":"8a52a543-c530-48d9-a046-ac4008df0477","Type":"ContainerStarted","Data":"85f9f12a183ee9ac32edf469f266b83c69141757b64a96e9390b64f35e4d5e44"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.164571 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" event={"ID":"6167bc7b-37d7-493c-93a9-dda69bedad76","Type":"ContainerStarted","Data":"043c1985b11074e7b3354ea056667e2302b6db796cdceab01b534bf7c69daf2f"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.167303 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-m2zrs" event={"ID":"8141131d-95f7-4103-bd2d-24630fc8e9b6","Type":"ContainerStarted","Data":"5b8d1f346d32a1b76defc440996ae2e761261c841faaff7479949fc80899404b"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.173199 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" event={"ID":"ebb4c857-4f54-440f-81d7-74eadc588099","Type":"ContainerStarted","Data":"612f2d6977f92b53466e960bb99d479304cdfbb8a53532347c3c91d7e97452e5"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.173264 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" event={"ID":"ebb4c857-4f54-440f-81d7-74eadc588099","Type":"ContainerStarted","Data":"522d99a2868e8fb58ae72c360b2455cb8b41c33a48074ae5c988e629653b1ce0"} Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.194812 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tr6kv"] Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.194854 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tw9q7"] Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.197097 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zzk29"] Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.212965 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.214610 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.714595704 +0000 UTC m=+154.460418353 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:55 crc kubenswrapper[4751]: W0130 21:16:55.284605 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5243a1a5_2eaa_4437_b10e_602439c7c838.slice/crio-645fc7ebe618428269447cd8603adff67691b64d1f9d9c2663bb2b21ba6d290d WatchSource:0}: Error finding container 645fc7ebe618428269447cd8603adff67691b64d1f9d9c2663bb2b21ba6d290d: Status 404 returned error can't find the container with id 645fc7ebe618428269447cd8603adff67691b64d1f9d9c2663bb2b21ba6d290d Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.315320 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.315445 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.81542893 +0000 UTC m=+154.561251579 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.315663 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.316667 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.81665053 +0000 UTC m=+154.562473179 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.394534 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6"] Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.409556 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hrfwj"] Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.417410 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.417590 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.917565258 +0000 UTC m=+154.663387907 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.417767 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.418150 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.918142592 +0000 UTC m=+154.663965241 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:55 crc kubenswrapper[4751]: W0130 21:16:55.420467 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8944fd86_eca1_4882_896d_1cd3faa4b418.slice/crio-80507a826eb117e9f84b1fffadaba4ffe071d8b7206e58ed16ef2e608361f118 WatchSource:0}: Error finding container 80507a826eb117e9f84b1fffadaba4ffe071d8b7206e58ed16ef2e608361f118: Status 404 returned error can't find the container with id 80507a826eb117e9f84b1fffadaba4ffe071d8b7206e58ed16ef2e608361f118 Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.474652 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l4lnd"] Jan 30 21:16:55 crc kubenswrapper[4751]: W0130 21:16:55.481702 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9930c0b_af24_4e39_b8e6_199a40779aff.slice/crio-8718a91e9e216a528e7a03a96184b721c4dcf45240361d27a287e5d95477615b WatchSource:0}: Error finding container 8718a91e9e216a528e7a03a96184b721c4dcf45240361d27a287e5d95477615b: Status 404 returned error can't find the container with id 8718a91e9e216a528e7a03a96184b721c4dcf45240361d27a287e5d95477615b Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.488496 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6"] Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.492538 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8"] Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.494063 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m"] Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.519376 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.521698 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.019568984 +0000 UTC m=+154.765391633 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.521744 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.522269 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.022258492 +0000 UTC m=+154.768081141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:55 crc kubenswrapper[4751]: W0130 21:16:55.535594 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a2068cc_b08f_467a_aaf9_a3bbfd99511d.slice/crio-f0e55941fc59c8062ba8f117382b8fcf44abb09f9648c8ecbd8edb156fd3143b WatchSource:0}: Error finding container f0e55941fc59c8062ba8f117382b8fcf44abb09f9648c8ecbd8edb156fd3143b: Status 404 returned error can't find the container with id f0e55941fc59c8062ba8f117382b8fcf44abb09f9648c8ecbd8edb156fd3143b Jan 30 21:16:55 crc kubenswrapper[4751]: W0130 21:16:55.543567 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod357257a0_2b96_4833_84cb_1c4326c34e61.slice/crio-19c67723388659a8252df5187303e97cd1624a168c40ced3350dea12fcdb2591 WatchSource:0}: Error finding container 19c67723388659a8252df5187303e97cd1624a168c40ced3350dea12fcdb2591: Status 404 returned error can't find the container with id 19c67723388659a8252df5187303e97cd1624a168c40ced3350dea12fcdb2591 Jan 30 21:16:55 crc kubenswrapper[4751]: W0130 21:16:55.545532 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcabe4321_bcb6_4d9e_905f_ab26bbc11b86.slice/crio-cf0e82f604a527d1b471a6f5e1f6957c9a7ed85b1aa22eddd59941f61465a91e WatchSource:0}: Error finding container cf0e82f604a527d1b471a6f5e1f6957c9a7ed85b1aa22eddd59941f61465a91e: Status 404 returned error can't find the container with id cf0e82f604a527d1b471a6f5e1f6957c9a7ed85b1aa22eddd59941f61465a91e Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.589494 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz"] Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.609470 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8"] Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.614978 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qthvh"] Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.622930 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.623426 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.123410306 +0000 UTC m=+154.869232955 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:55 crc kubenswrapper[4751]: W0130 21:16:55.670063 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd4cdd19_fcd5_4fa7_835b_f2c233746297.slice/crio-5f65da52284fb7569623f0439e2b42fd9889ef7afc3035696cce46a3f818d795 WatchSource:0}: Error finding container 5f65da52284fb7569623f0439e2b42fd9889ef7afc3035696cce46a3f818d795: Status 404 returned error can't find the container with id 5f65da52284fb7569623f0439e2b42fd9889ef7afc3035696cce46a3f818d795 Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.724040 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.724352 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.224341124 +0000 UTC m=+154.970163773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.825913 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.826430 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.32641601 +0000 UTC m=+155.072238659 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.827669 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.858499 4751 patch_prober.go:28] interesting pod/router-default-5444994796-w42cs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:16:55 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 30 21:16:55 crc kubenswrapper[4751]: [+]process-running ok Jan 30 21:16:55 crc kubenswrapper[4751]: healthz check failed Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.858546 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w42cs" podUID="dfb29a82-8be0-4219-81b1-fecfcb4e1061" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.930311 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.930895 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.430883568 +0000 UTC m=+155.176706217 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.956668 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-30 21:11:54 +0000 UTC, rotation deadline is 2026-11-06 20:11:49.470465005 +0000 UTC Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.956722 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6718h54m53.513745139s for next certificate rotation Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.969804 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" podStartSLOduration=127.969786988 podStartE2EDuration="2m7.969786988s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:55.942541275 +0000 UTC m=+154.688363924" watchObservedRunningTime="2026-01-30 21:16:55.969786988 +0000 UTC m=+154.715609637" Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.977273 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-7bw65" podStartSLOduration=127.977255168 podStartE2EDuration="2m7.977255168s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:55.96947238 +0000 UTC m=+154.715295029" watchObservedRunningTime="2026-01-30 21:16:55.977255168 +0000 UTC m=+154.723077817" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.007644 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" podStartSLOduration=127.007628221 podStartE2EDuration="2m7.007628221s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.006451571 +0000 UTC m=+154.752274220" watchObservedRunningTime="2026-01-30 21:16:56.007628221 +0000 UTC m=+154.753450870" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.066455 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.066788 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.566773046 +0000 UTC m=+155.312595695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.116758 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" podStartSLOduration=128.116736067 podStartE2EDuration="2m8.116736067s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.116357578 +0000 UTC m=+154.862180227" watchObservedRunningTime="2026-01-30 21:16:56.116736067 +0000 UTC m=+154.862558716" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.148695 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" podStartSLOduration=128.14868071 podStartE2EDuration="2m8.14868071s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.147654634 +0000 UTC m=+154.893477283" watchObservedRunningTime="2026-01-30 21:16:56.14868071 +0000 UTC m=+154.894503359" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.171608 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.171926 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.67191302 +0000 UTC m=+155.417735669 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.225722 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" event={"ID":"aea78078-eab1-4c82-b072-e6b65f959815","Type":"ContainerStarted","Data":"e40e173cc0f51a6b097e514b59afc1940f0317e68c359b09cbfe3bf288df4d30"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.228870 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" event={"ID":"542e69b1-7290-4693-b85b-5c9566314a51","Type":"ContainerStarted","Data":"1844b4d988b49509adb40ff75d4d970ed70ebfe468e66c881ff3687c883f5a60"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.228921 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" event={"ID":"542e69b1-7290-4693-b85b-5c9566314a51","Type":"ContainerStarted","Data":"9494c4fdb0e6e31f5baf4b51551b2df0577ec8cd54500dab8c9b232d142be352"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.235366 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8" event={"ID":"dd4cdd19-fcd5-4fa7-835b-f2c233746297","Type":"ContainerStarted","Data":"5f65da52284fb7569623f0439e2b42fd9889ef7afc3035696cce46a3f818d795"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.242858 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" podStartSLOduration=128.242842446 podStartE2EDuration="2m8.242842446s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.241920062 +0000 UTC m=+154.987742711" watchObservedRunningTime="2026-01-30 21:16:56.242842446 +0000 UTC m=+154.988665095" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.262190 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8" event={"ID":"357257a0-2b96-4833-84cb-1c4326c34e61","Type":"ContainerStarted","Data":"9643936ded80d717e1c8cabdbac4b86afe7dffa2980b0d30f1cc6e306cb118de"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.262246 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8" event={"ID":"357257a0-2b96-4833-84cb-1c4326c34e61","Type":"ContainerStarted","Data":"19c67723388659a8252df5187303e97cd1624a168c40ced3350dea12fcdb2591"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.270225 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hrfwj" event={"ID":"a9930c0b-af24-4e39-b8e6-199a40779aff","Type":"ContainerStarted","Data":"8718a91e9e216a528e7a03a96184b721c4dcf45240361d27a287e5d95477615b"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.271894 4751 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-console-operator/console-operator-58897d9998-6gckm" podStartSLOduration=128.271877884 podStartE2EDuration="2m8.271877884s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.270379497 +0000 UTC m=+155.016202146" watchObservedRunningTime="2026-01-30 21:16:56.271877884 +0000 UTC m=+155.017700533" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.272639 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.273600 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.773584888 +0000 UTC m=+155.519407537 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.277635 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" event={"ID":"7aae80f4-df3d-4545-8a9b-5a840e379b65","Type":"ContainerStarted","Data":"5a2da5db1537dd1e0caeb21de13dbfe7277a6c921b5656ebef43be3650a0aecf"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.294090 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-w42cs" podStartSLOduration=128.294075899 podStartE2EDuration="2m8.294075899s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.293907215 +0000 UTC m=+155.039729864" watchObservedRunningTime="2026-01-30 21:16:56.294075899 +0000 UTC m=+155.039898538" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.296721 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" event={"ID":"ac8a7752-ba4b-41eb-a085-b493f6876beb","Type":"ContainerStarted","Data":"88fd38dffccf9a74bc43571a7f00a9c92c0f29620e4e59cc8b050ef11a744029"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.317494 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" event={"ID":"4d9e5b29-c71c-4129-bd91-ccb81940c815","Type":"ContainerStarted","Data":"96e1afb3b0419908f40cb60623fd27af3b84f4d53f78c69eddb9b5e5b22e2c35"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.317537 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" 
event={"ID":"4d9e5b29-c71c-4129-bd91-ccb81940c815","Type":"ContainerStarted","Data":"97af7a7c37ea36091f979446aceb409c1476e9f3a415108218fcd95019c623c8"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.334987 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-8l2v5" podStartSLOduration=128.33497243 podStartE2EDuration="2m8.33497243s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.334800656 +0000 UTC m=+155.080623305" watchObservedRunningTime="2026-01-30 21:16:56.33497243 +0000 UTC m=+155.080795079" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.348118 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-m2zrs" event={"ID":"8141131d-95f7-4103-bd2d-24630fc8e9b6","Type":"ContainerStarted","Data":"4c777cc15bebf50d178b32d29256e0b7dd80cf9821f544ec2e234e019d95711a"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.354149 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" event={"ID":"8a52a543-c530-48d9-a046-ac4008df0477","Type":"ContainerStarted","Data":"c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.355079 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.374062 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.376627 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.876611048 +0000 UTC m=+155.622433697 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.385936 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" event={"ID":"6167bc7b-37d7-493c-93a9-dda69bedad76","Type":"ContainerStarted","Data":"7a29fc58d45eaf746538e78896087f19c530206258fd49eedc7a7f2a4618055f"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.387747 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.399901 4751 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6dcxn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" start-of-body= Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.399940 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" podUID="8a52a543-c530-48d9-a046-ac4008df0477" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.419505 4751 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-z2l88 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.419562 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" podUID="6167bc7b-37d7-493c-93a9-dda69bedad76" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.426830 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" event={"ID":"80a7c9c5-51fd-457c-a16b-c7ad90f92811","Type":"ContainerStarted","Data":"f18a9550c2931bce9d53f0bac018af4c356d7b3a3ffa1a84e39a40722656f4c1"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.426888 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" event={"ID":"80a7c9c5-51fd-457c-a16b-c7ad90f92811","Type":"ContainerStarted","Data":"cfcf61badfda05d6791c91d543c6e454cd556a505b5b99f7a6509fecf9ef1b71"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.451007 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" podStartSLOduration=127.450994511 podStartE2EDuration="2m7.450994511s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.399614645 +0000 UTC m=+155.145437284" watchObservedRunningTime="2026-01-30 21:16:56.450994511 +0000 UTC m=+155.196817160" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.467809 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" event={"ID":"cabe4321-bcb6-4d9e-905f-ab26bbc11b86","Type":"ContainerStarted","Data":"cf0e82f604a527d1b471a6f5e1f6957c9a7ed85b1aa22eddd59941f61465a91e"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.482199 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.482367 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.982352009 +0000 UTC m=+155.728174658 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.482667 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.483045 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.983038396 +0000 UTC m=+155.728861045 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.491003 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" podStartSLOduration=128.490985919 podStartE2EDuration="2m8.490985919s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.451577846 +0000 UTC m=+155.197400495" watchObservedRunningTime="2026-01-30 21:16:56.490985919 +0000 UTC m=+155.236808568" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.491359 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-m2zrs" podStartSLOduration=5.491355428 podStartE2EDuration="5.491355428s" podCreationTimestamp="2026-01-30 21:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.489579463 +0000 UTC m=+155.235402102" watchObservedRunningTime="2026-01-30 21:16:56.491355428 +0000 UTC m=+155.237178077" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.522155 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd" event={"ID":"c1f866c2-11a6-4c9b-8d42-54e5f0a18195","Type":"ContainerStarted","Data":"00340f6885b4ea09edb9938f2eaf4cb4b1bb4178081dd8c51a8ebeec49aede6b"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.540985 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" event={"ID":"9c37a69a-9a13-400f-bfff-0886b6062725","Type":"ContainerStarted","Data":"9b04f80bd90892178f203a15d0c0e79a01004082e993249f204ce6be1d333b4f"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.567969 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" event={"ID":"20cd63ce-b8cf-45fa-9d89-d917cff2894b","Type":"ContainerStarted","Data":"709372be1327def5f43b7802a83ef2673dfaefa928d2bef73c5b5ac6bc7f6656"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.568263 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" event={"ID":"20cd63ce-b8cf-45fa-9d89-d917cff2894b","Type":"ContainerStarted","Data":"c01f8af6fedfd6718dc8ed93869d4e4dd7fb85394b5774e9ceeadd79f093c275"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.568274 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" event={"ID":"20cd63ce-b8cf-45fa-9d89-d917cff2894b","Type":"ContainerStarted","Data":"15c959dfeaee5a5289df5b5e9993a2d074e0b2032dc92fdab6515e23455331c4"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.580538 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" podStartSLOduration=128.580515787 podStartE2EDuration="2m8.580515787s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.525159249 +0000 UTC m=+155.270981898" watchObservedRunningTime="2026-01-30 21:16:56.580515787 +0000 UTC m=+155.326338436" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.587588 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.590413 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.090375618 +0000 UTC m=+155.836198277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.592076 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2hvtm" event={"ID":"236e4954-0baf-4d9e-b36f-eed37707af26","Type":"ContainerStarted","Data":"03868f220c052b3c6459f165fd26d837b0755e6dbe97d73e57fd6aae5d2df2d7"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.608068 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" event={"ID":"130dae88-caa3-4e75-b3fc-d3b6dcd5b577","Type":"ContainerStarted","Data":"e2cfc2131c1e2622f48bd285e09383a7f2fd25ccf7150f6c4a9e51294735c7f6"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.608113 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" event={"ID":"130dae88-caa3-4e75-b3fc-d3b6dcd5b577","Type":"ContainerStarted","Data":"874322af55667e4c057f700674220f29bd239a5ba05d42ebeade5c54fd297252"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.608915 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.610356 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8" podStartSLOduration=127.610334896 podStartE2EDuration="2m7.610334896s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.601797389 +0000 UTC m=+155.347620028" watchObservedRunningTime="2026-01-30 21:16:56.610334896 +0000 UTC m=+155.356157545" Jan 30 
21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.636665 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" event={"ID":"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a","Type":"ContainerStarted","Data":"6f766816f316b82da30a3fc3ad8967029a9b6b92bf740e55f8a941bab728a527"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.636711 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" event={"ID":"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a","Type":"ContainerStarted","Data":"221ec0493271241afc9442c42e1ae75b2e2a81236db1a2e00a1a6a38c9fce188"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.649879 4751 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-mw25p container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.649927 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" podUID="130dae88-caa3-4e75-b3fc-d3b6dcd5b577" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.659105 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" event={"ID":"d5d92565-846d-43a6-92e2-02351fec2f63","Type":"ContainerStarted","Data":"1b74f4ded95aaf79d5704f77cee527595aa4ec83d7d477e864a8293f5ef8f596"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.663386 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" podStartSLOduration=128.663365975 podStartE2EDuration="2m8.663365975s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.659160468 +0000 UTC m=+155.404983117" watchObservedRunningTime="2026-01-30 21:16:56.663365975 +0000 UTC m=+155.409188624" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.694081 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" event={"ID":"8944fd86-eca1-4882-896d-1cd3faa4b418","Type":"ContainerStarted","Data":"d9b267654323352bb0251bdfb2dfad2c601b310fe1e8a971b0087271afb9896a"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.694139 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" event={"ID":"8944fd86-eca1-4882-896d-1cd3faa4b418","Type":"ContainerStarted","Data":"80507a826eb117e9f84b1fffadaba4ffe071d8b7206e58ed16ef2e608361f118"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.694861 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 
21:16:56.698747 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" podStartSLOduration=128.698725004 podStartE2EDuration="2m8.698725004s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.694520698 +0000 UTC m=+155.440343347" watchObservedRunningTime="2026-01-30 21:16:56.698725004 +0000 UTC m=+155.444547653" Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.700437 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.200418137 +0000 UTC m=+155.946240816 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.712762 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" event={"ID":"6d872f03-d4d0-49bc-9758-05060035dafa","Type":"ContainerStarted","Data":"f19059a58cb3c51eeb050b443db58c71f664e505dd90a7d658b0a494d918d0c7"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.714857 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" event={"ID":"af4fa723-4cc5-4fa1-9162-fa20b958fa29","Type":"ContainerStarted","Data":"d7f2abaa8247b2b0f1640d90c2d63f187961c6c0db223265431fa28f47844df5"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.727857 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" event={"ID":"65e79489-5e6b-421c-8019-b1d5161a0341","Type":"ContainerStarted","Data":"0f075caa6b96aeea7b26e5dd9903f8cd6841c2c8cc07081290c2d29c12a5bc8a"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.740674 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" event={"ID":"8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6","Type":"ContainerStarted","Data":"1a8f224bfad774a94e3a431422839cc8cd5a59af5150c7aa3773c22ce268a7d2"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.740738 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" event={"ID":"8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6","Type":"ContainerStarted","Data":"367f0b1c05ba7ca0b98e36f009034070defc27427e06a8c4978262a40c4dfa48"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.747472 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" podStartSLOduration=127.747453294 podStartE2EDuration="2m7.747453294s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 21:16:56.741443352 +0000 UTC m=+155.487266011" watchObservedRunningTime="2026-01-30 21:16:56.747453294 +0000 UTC m=+155.493275943" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.756975 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" event={"ID":"5243a1a5-2eaa-4437-b10e-602439c7c838","Type":"ContainerStarted","Data":"b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.757017 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" event={"ID":"5243a1a5-2eaa-4437-b10e-602439c7c838","Type":"ContainerStarted","Data":"645fc7ebe618428269447cd8603adff67691b64d1f9d9c2663bb2b21ba6d290d"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.757787 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.759517 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" event={"ID":"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b","Type":"ContainerStarted","Data":"9ad457c424e2d83475a43a4241289f140884cfa929ad927ff45d123733d5d732"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.775129 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" event={"ID":"cc9ed63a-23a2-4b50-a290-0409ff14fd95","Type":"ContainerStarted","Data":"e542d53fa8d38b44c5415e62c079644bf8fb944ad32fd45452254fbadf2caa51"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.775475 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" event={"ID":"cc9ed63a-23a2-4b50-a290-0409ff14fd95","Type":"ContainerStarted","Data":"b6ac56afbe946ed8a3114588a856c9022e503d0feb1988aaa10f041f9dcbf7e4"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.780089 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" event={"ID":"a50ebf9b-a11e-47ac-828c-f1858be195d7","Type":"ContainerStarted","Data":"6b42217edb17f579b33e8f78ad708c6bc0d1aa2d441939bea28980480fa3e4b1"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.780111 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" event={"ID":"a50ebf9b-a11e-47ac-828c-f1858be195d7","Type":"ContainerStarted","Data":"b72d85bf7b25e672b8bd747daf330f9431c1329f8bcf19a5adc8f7d9dffafb40"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.780361 4751 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tr6kv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.780392 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" podUID="5243a1a5-2eaa-4437-b10e-602439c7c838" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 30 21:16:56 crc kubenswrapper[4751]: 
I0130 21:16:56.782032 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" event={"ID":"3a2068cc-b08f-467a-aaf9-a3bbfd99511d","Type":"ContainerStarted","Data":"f0e55941fc59c8062ba8f117382b8fcf44abb09f9648c8ecbd8edb156fd3143b"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.785269 4751 patch_prober.go:28] interesting pod/downloads-7954f5f757-8l2v5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.785312 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8l2v5" podUID="21dc9dc0-702d-49a7-baed-f8e70f6867f3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.793655 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" podStartSLOduration=127.793624139 podStartE2EDuration="2m7.793624139s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.79054823 +0000 UTC m=+155.536370879" watchObservedRunningTime="2026-01-30 21:16:56.793624139 +0000 UTC m=+155.539446788" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.795097 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.798693 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.799177 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.799391 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.299375265 +0000 UTC m=+156.045197914 (durationBeforeRetry 500ms). 
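The recurring volume failure here looks like a registration race: both the unmount for pod 8f668bae and the mount for image-registry-697d97f7c8-9lsr5 need a CSI client for kubevirt.io.hostpath-provisioner, but the csi-hostpathplugin pod that would register that driver is itself still starting (its ContainerStarted event appears further down), so every attempt fails until registration completes. A minimal client-go sketch for checking what a node has registered, assuming a reachable kubeconfig and the node name crc; this is a diagnostic aid, not kubelet code:

```go
// Diagnostic sketch (not kubelet code): list the CSI drivers the node has
// registered, to check whether kubevirt.io.hostpath-provisioner has finished
// plugin registration. Assumes KUBECONFIG points at a reachable cluster.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The CSINode object mirrors the kubelet's view of locally registered
	// CSI plugins; an empty Drivers list matches the error in this log.
	node, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range node.Spec.Drivers {
		fmt.Println("registered:", d.Name)
	}
}
```

Once the hostpath plugin pod is up and registration lands, the same Mount/Unmount operations in this log should start succeeding on their next retry.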
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.799545 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.801853 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.301844978 +0000 UTC m=+156.047667627 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.815720 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-6gckm" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.831237 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" podStartSLOduration=128.831224316 podStartE2EDuration="2m8.831224316s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.830316452 +0000 UTC m=+155.576139101" watchObservedRunningTime="2026-01-30 21:16:56.831224316 +0000 UTC m=+155.577046965" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.838126 4751 patch_prober.go:28] interesting pod/router-default-5444994796-w42cs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:16:56 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 30 21:16:56 crc kubenswrapper[4751]: [+]process-running ok Jan 30 21:16:56 crc kubenswrapper[4751]: healthz check failed Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.838191 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w42cs" podUID="dfb29a82-8be0-4219-81b1-fecfcb4e1061" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.903653 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" podStartSLOduration=127.903631988 podStartE2EDuration="2m7.903631988s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.869674783 +0000 UTC m=+155.615497432" watchObservedRunningTime="2026-01-30 21:16:56.903631988 +0000 UTC m=+155.649454637" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.904306 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.904427 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.404406158 +0000 UTC m=+156.150228807 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.904947 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" podStartSLOduration=127.904939861 podStartE2EDuration="2m7.904939861s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.904206883 +0000 UTC m=+155.650029532" watchObservedRunningTime="2026-01-30 21:16:56.904939861 +0000 UTC m=+155.650762510" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.906385 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.910008 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.40999564 +0000 UTC m=+156.155818289 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.013745 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.014203 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" podStartSLOduration=128.014192541 podStartE2EDuration="2m8.014192541s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.947643157 +0000 UTC m=+155.693465796" watchObservedRunningTime="2026-01-30 21:16:57.014192541 +0000 UTC m=+155.760015190" Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.014314 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.514303384 +0000 UTC m=+156.260126033 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.014793 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" podStartSLOduration=129.014788706 podStartE2EDuration="2m9.014788706s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:57.011527253 +0000 UTC m=+155.757349902" watchObservedRunningTime="2026-01-30 21:16:57.014788706 +0000 UTC m=+155.760611355" Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.081875 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" podStartSLOduration=129.081849472 podStartE2EDuration="2m9.081849472s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:57.050634308 +0000 UTC m=+155.796456957" watchObservedRunningTime="2026-01-30 21:16:57.081849472 +0000 UTC m=+155.827672121" Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.082803 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" podStartSLOduration=117.082797077 podStartE2EDuration="1m57.082797077s" podCreationTimestamp="2026-01-30 21:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:57.081732459 +0000 UTC m=+155.827555108" watchObservedRunningTime="2026-01-30 21:16:57.082797077 +0000 UTC m=+155.828619726" Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.100889 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" podStartSLOduration=129.100857646 podStartE2EDuration="2m9.100857646s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:57.097684235 +0000 UTC m=+155.843506884" watchObservedRunningTime="2026-01-30 21:16:57.100857646 +0000 UTC m=+155.846680295" Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.115671 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.115917 4751 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.615906988 +0000 UTC m=+156.361729637 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.160493 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" podStartSLOduration=128.160466112 podStartE2EDuration="2m8.160466112s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:57.15998374 +0000 UTC m=+155.905806389" watchObservedRunningTime="2026-01-30 21:16:57.160466112 +0000 UTC m=+155.906288751" Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.162555 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" podStartSLOduration=129.162547935 podStartE2EDuration="2m9.162547935s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:57.125573945 +0000 UTC m=+155.871396594" watchObservedRunningTime="2026-01-30 21:16:57.162547935 +0000 UTC m=+155.908370584" Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.183622 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2hvtm" podStartSLOduration=6.183609112 podStartE2EDuration="6.183609112s" podCreationTimestamp="2026-01-30 21:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:57.182246257 +0000 UTC m=+155.928068906" watchObservedRunningTime="2026-01-30 21:16:57.183609112 +0000 UTC m=+155.929431761" Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.218399 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.218530 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.718510869 +0000 UTC m=+156.464333518 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.218626 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.218877 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.718868948 +0000 UTC m=+156.464691587 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.319368 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.320548 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.820532165 +0000 UTC m=+156.566354814 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.426064 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.426357 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.926344837 +0000 UTC m=+156.672167486 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.527289 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.527577 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.027561682 +0000 UTC m=+156.773384331 (durationBeforeRetry 500ms). 
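Each failed volume operation in this stream is parked rather than retried inline: the executor records a next-allowed attempt time and rejects the operation until that time passes, which is what "No retries permitted until <t> (durationBeforeRetry 500ms)" reports, with the retry timestamp sitting exactly 500ms after the failure. A minimal sketch of that gate, assuming the fixed 500ms delay these entries show; kubelet's actual bookkeeping (nestedpendingoperations, with its own backoff policy) is more involved:

```go
// Sketch of the "no retries permitted until <t>" gate the volume reconciler
// logs: remember a next-allowed time per operation key and skip the work
// until it passes. The key and the fixed 500ms delay are illustrative.
package main

import (
	"fmt"
	"time"
)

type gate struct{ nextAttempt map[string]time.Time }

func (g *gate) tryRun(key string, op func() error) {
	if time.Now().Before(g.nextAttempt[key]) {
		return // still inside durationBeforeRetry; the reconciler moves on
	}
	if err := op(); err != nil {
		retryAt := time.Now().Add(500 * time.Millisecond)
		g.nextAttempt[key] = retryAt
		fmt.Printf("failed, no retries permitted until %s: %v\n",
			retryAt.Format(time.RFC3339Nano), err)
	}
}

func main() {
	g := &gate{nextAttempt: map[string]time.Time{}}
	for i := 0; i < 3; i++ {
		g.tryRun("pvc-657094db", func() error {
			return fmt.Errorf("driver name kubevirt.io.hostpath-provisioner not found")
		})
		time.Sleep(600 * time.Millisecond) // next reconciler pass
	}
}
```

That gate is why the same UnmountVolume/MountVolume pair reappears every ~100ms below as the reconciler re-queues it, while the actual attempts stay 500ms apart.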
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.628319 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.628579 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.128566672 +0000 UTC m=+156.874389321 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.729426 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.729605 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.229579432 +0000 UTC m=+156.975402081 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.729659 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.729970 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.229960633 +0000 UTC m=+156.975783282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.789571 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" event={"ID":"4d9e5b29-c71c-4129-bd91-ccb81940c815","Type":"ContainerStarted","Data":"360186e3f9fbdb7bb5483ae1bbca46098ef76ca719be5753d37c401e01f09c3c"} Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.791404 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" event={"ID":"3a2068cc-b08f-467a-aaf9-a3bbfd99511d","Type":"ContainerStarted","Data":"1e13584c446211f465f2d1ac4a5de34086db78494ff1304dc396037fdb0fe0b1"} Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.793155 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8" event={"ID":"dd4cdd19-fcd5-4fa7-835b-f2c233746297","Type":"ContainerStarted","Data":"5b2a4984dfbf89b0ae73e2743675b7a8623b010a8f7576d244f001eeeefbfb9d"} Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.793183 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8" event={"ID":"dd4cdd19-fcd5-4fa7-835b-f2c233746297","Type":"ContainerStarted","Data":"daa4aa43dd5da7fb3132eae020ee2ac3bab4429041936615bd028d8234efa696"} Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.794665 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hrfwj" event={"ID":"a9930c0b-af24-4e39-b8e6-199a40779aff","Type":"ContainerStarted","Data":"2b380717e2151d03c3e094cfe7519c12e0b58dffabdc518fdac1f169cb3889be"} Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.794704 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/dns-default-hrfwj" event={"ID":"a9930c0b-af24-4e39-b8e6-199a40779aff","Type":"ContainerStarted","Data":"faff2b0bb4864f702d447f8063fbfcb7134b5d671e7233685154bd9c375804f9"} Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.794744 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-hrfwj" Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.795731 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" event={"ID":"d5d92565-846d-43a6-92e2-02351fec2f63","Type":"ContainerStarted","Data":"113dd0733a0d583b9bf7cd1daf32704bde6dbfc2df9582cc877fd85f2ca4bd07"} Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.797914 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" event={"ID":"6d872f03-d4d0-49bc-9758-05060035dafa","Type":"ContainerStarted","Data":"c3fa6c4e3647efbc9a9c4ba6681fa6e83dba52be62bc5e47071415454e50dc07"} Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.798892 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" event={"ID":"ac8a7752-ba4b-41eb-a085-b493f6876beb","Type":"ContainerStarted","Data":"9fe6859ff764b2f41d6144031bd1f679edfc6f8944e225f3b92dfe1871d75e28"} Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.800333 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" event={"ID":"8944fd86-eca1-4882-896d-1cd3faa4b418","Type":"ContainerStarted","Data":"0e5532a413833208984b0d47d0ea538fb80d0562947d9be06f62fd7a9745fccd"} Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.800485 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.802302 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" event={"ID":"a50ebf9b-a11e-47ac-828c-f1858be195d7","Type":"ContainerStarted","Data":"f831c2bf0099d351025e6d632d19e9763440cb68fd1ceb4d79eade95dc2c8c24"} Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.804632 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" event={"ID":"cabe4321-bcb6-4d9e-905f-ab26bbc11b86","Type":"ContainerStarted","Data":"e68ba3e2efbe19e2c938e0a4c80a3c9117953b991aa34ce826bb49d53a5d4d54"} Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.804792 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.807131 4751 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nldk6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.807197 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" podUID="cabe4321-bcb6-4d9e-905f-ab26bbc11b86" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection 
refused" Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.807784 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd" event={"ID":"c1f866c2-11a6-4c9b-8d42-54e5f0a18195","Type":"ContainerStarted","Data":"ffdd2816ecc76f6e2e1c431df20c09cedf555e2412af905682bdfa0dfa33a8aa"} Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.807826 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd" event={"ID":"c1f866c2-11a6-4c9b-8d42-54e5f0a18195","Type":"ContainerStarted","Data":"d5595fc9fc0e8edb4853b44cce578c262b541d49bf17b4d539ff3c78acd649bb"} Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.809510 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" event={"ID":"7aae80f4-df3d-4545-8a9b-5a840e379b65","Type":"ContainerStarted","Data":"4a3ee9dcc1747c0610c42dc8198b7def506f178d1490eea148f0037ff4e5932c"} Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.810005 4751 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tr6kv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.810044 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" podUID="5243a1a5-2eaa-4437-b10e-602439c7c838" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.816043 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.830447 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.830612 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.330571462 +0000 UTC m=+157.076394111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.831032 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.831572 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.331554347 +0000 UTC m=+157.077377116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.832514 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.832879 4751 patch_prober.go:28] interesting pod/router-default-5444994796-w42cs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:16:57 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 30 21:16:57 crc kubenswrapper[4751]: [+]process-running ok Jan 30 21:16:57 crc kubenswrapper[4751]: healthz check failed Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.832929 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w42cs" podUID="dfb29a82-8be0-4219-81b1-fecfcb4e1061" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.866756 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.870433 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.888894 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" podStartSLOduration=128.888875995 podStartE2EDuration="2m8.888875995s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:57.887311436 +0000 UTC m=+156.633134085" watchObservedRunningTime="2026-01-30 21:16:57.888875995 +0000 UTC m=+156.634698644" Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.932104 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.932287 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.432262069 +0000 UTC m=+157.178084718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.937470 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.938039 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.438025356 +0000 UTC m=+157.183848005 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.000478 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" podStartSLOduration=129.000462734 podStartE2EDuration="2m9.000462734s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:57.996597776 +0000 UTC m=+156.742420425" watchObservedRunningTime="2026-01-30 21:16:58.000462734 +0000 UTC m=+156.746285393" Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.039169 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" podStartSLOduration=130.039152419 podStartE2EDuration="2m10.039152419s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:58.038697397 +0000 UTC m=+156.784520046" watchObservedRunningTime="2026-01-30 21:16:58.039152419 +0000 UTC m=+156.784975068" Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.039306 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.039552 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.539538649 +0000 UTC m=+157.285361288 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.072376 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" podStartSLOduration=129.072360864 podStartE2EDuration="2m9.072360864s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:58.069746147 +0000 UTC m=+156.815568796" watchObservedRunningTime="2026-01-30 21:16:58.072360864 +0000 UTC m=+156.818183513" Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.104470 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8" podStartSLOduration=129.10445363 podStartE2EDuration="2m9.10445363s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:58.103598009 +0000 UTC m=+156.849420658" watchObservedRunningTime="2026-01-30 21:16:58.10445363 +0000 UTC m=+156.850276279" Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.140183 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.140517 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.640503157 +0000 UTC m=+157.386325806 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.247791 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.248209 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.748188738 +0000 UTC m=+157.494011387 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.248248 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.248604 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.263815 4751 patch_prober.go:28] interesting pod/apiserver-76f77b778f-plkp9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.263881 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" podUID="6d872f03-d4d0-49bc-9758-05060035dafa" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.282158 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" podStartSLOduration=130.282142611 podStartE2EDuration="2m10.282142611s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:58.245451448 +0000 UTC m=+156.991274097" watchObservedRunningTime="2026-01-30 21:16:58.282142611 +0000 UTC m=+157.027965260" Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.320972 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" podStartSLOduration=129.320955599 podStartE2EDuration="2m9.320955599s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:58.283483505 +0000 UTC m=+157.029306154" watchObservedRunningTime="2026-01-30 21:16:58.320955599 +0000 UTC m=+157.066778248" Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.321136 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-hrfwj" podStartSLOduration=7.321133644 podStartE2EDuration="7.321133644s" podCreationTimestamp="2026-01-30 21:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:58.31944045 +0000 UTC m=+157.065263099" watchObservedRunningTime="2026-01-30 21:16:58.321133644 +0000 UTC m=+157.066956293" Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.348737 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.349235 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.849223638 +0000 UTC m=+157.595046287 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.392206 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" podStartSLOduration=130.392187441 podStartE2EDuration="2m10.392187441s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:58.347289789 +0000 UTC m=+157.093112438" watchObservedRunningTime="2026-01-30 21:16:58.392187441 +0000 UTC m=+157.138010090" Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.393148 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd" podStartSLOduration=129.393143105 podStartE2EDuration="2m9.393143105s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:58.389956614 +0000 UTC m=+157.135779263" watchObservedRunningTime="2026-01-30 21:16:58.393143105 +0000 UTC m=+157.138965754" Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.441705 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" podStartSLOduration=130.441686621 podStartE2EDuration="2m10.441686621s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:58.439763382 +0000 UTC m=+157.185586031" watchObservedRunningTime="2026-01-30 21:16:58.441686621 +0000 UTC m=+157.187509270" Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.449894 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.450052 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.950028953 +0000 UTC m=+157.695851592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.450404 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.450699 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.95069172 +0000 UTC m=+157.696514369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.552015 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.552133 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.05210396 +0000 UTC m=+157.797926609 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.552562 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.552935 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.05292383 +0000 UTC m=+157.798746479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.653692 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.653834 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.153810387 +0000 UTC m=+157.899633036 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.654022 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.654342 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.154319081 +0000 UTC m=+157.900141730 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.689407 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.689463 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.700097 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.755079 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.755409 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.255382802 +0000 UTC m=+158.001205451 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.826284 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" event={"ID":"ac8a7752-ba4b-41eb-a085-b493f6876beb","Type":"ContainerStarted","Data":"f7de1529cb1b6907cd56542193d3fe1dcd309fb7be492dd353b1a2bdba56394c"} Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.829478 4751 patch_prober.go:28] interesting pod/router-default-5444994796-w42cs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:16:58 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 30 21:16:58 crc kubenswrapper[4751]: [+]process-running ok Jan 30 21:16:58 crc kubenswrapper[4751]: healthz check failed Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.829689 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w42cs" podUID="dfb29a82-8be0-4219-81b1-fecfcb4e1061" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.836283 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.846770 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.860147 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.860725 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.360712562 +0000 UTC m=+158.106535211 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.961830 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.962014 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.461985628 +0000 UTC m=+158.207808277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.962254 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.964791 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.464776629 +0000 UTC m=+158.210599278 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.063866 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.064229 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.56421504 +0000 UTC m=+158.310037689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.157934 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-54ffx"] Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.158777 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.165347 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.165659 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.6656425 +0000 UTC m=+158.411465149 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.171168 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.183922 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-54ffx"] Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.266342 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.266516 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.766490556 +0000 UTC m=+158.512313195 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.266548 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-utilities\") pod \"certified-operators-54ffx\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") " pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.266571 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tng6d\" (UniqueName: \"kubernetes.io/projected/a59ef52d-2f47-42ac-a233-0285be317cc9-kube-api-access-tng6d\") pod \"certified-operators-54ffx\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") " pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.266660 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-catalog-content\") pod \"certified-operators-54ffx\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") " pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.266700 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.266952 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.766938977 +0000 UTC m=+158.512761626 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.338793 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wvvq8"] Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.339775 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.345100 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.367455 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.367645 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-catalog-content\") pod \"certified-operators-54ffx\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") " pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.367702 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-utilities\") pod \"certified-operators-54ffx\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") " pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.367805 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tng6d\" (UniqueName: \"kubernetes.io/projected/a59ef52d-2f47-42ac-a233-0285be317cc9-kube-api-access-tng6d\") pod \"certified-operators-54ffx\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") " pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.368147 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 21:16:59.868128621 +0000 UTC m=+158.613951270 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.369052 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-catalog-content\") pod \"certified-operators-54ffx\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") " pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.369260 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-utilities\") pod \"certified-operators-54ffx\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") " pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.370098 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wvvq8"] Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.431501 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tng6d\" (UniqueName: \"kubernetes.io/projected/a59ef52d-2f47-42ac-a233-0285be317cc9-kube-api-access-tng6d\") pod \"certified-operators-54ffx\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") " pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.469292 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-utilities\") pod \"community-operators-wvvq8\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") " pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.469354 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-catalog-content\") pod \"community-operators-wvvq8\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") " pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.469400 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.469459 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh7pr\" (UniqueName: \"kubernetes.io/projected/5de678c2-f43a-44fa-ab58-259f765c3e31-kube-api-access-dh7pr\") pod \"community-operators-wvvq8\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") " 
pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.469765 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.969749068 +0000 UTC m=+158.715571717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.471114 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.526359 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b6k7d"] Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.527462 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.548283 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6k7d"] Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.573507 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.573760 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh7pr\" (UniqueName: \"kubernetes.io/projected/5de678c2-f43a-44fa-ab58-259f765c3e31-kube-api-access-dh7pr\") pod \"community-operators-wvvq8\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") " pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.573796 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj5d8\" (UniqueName: \"kubernetes.io/projected/41823bd1-3ae0-4f41-847e-d0b35047047c-kube-api-access-nj5d8\") pod \"certified-operators-b6k7d\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.573820 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-utilities\") pod \"community-operators-wvvq8\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") " pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.573841 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-catalog-content\") pod \"certified-operators-b6k7d\" (UID: 
\"41823bd1-3ae0-4f41-847e-d0b35047047c\") " pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.573872 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-catalog-content\") pod \"community-operators-wvvq8\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") " pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.573935 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-utilities\") pod \"certified-operators-b6k7d\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.574064 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:17:00.074045631 +0000 UTC m=+158.819868280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.574660 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-utilities\") pod \"community-operators-wvvq8\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") " pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.574874 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-catalog-content\") pod \"community-operators-wvvq8\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") " pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.600099 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh7pr\" (UniqueName: \"kubernetes.io/projected/5de678c2-f43a-44fa-ab58-259f765c3e31-kube-api-access-dh7pr\") pod \"community-operators-wvvq8\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") " pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.657859 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.675002 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-utilities\") pod \"certified-operators-b6k7d\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.675052 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj5d8\" (UniqueName: \"kubernetes.io/projected/41823bd1-3ae0-4f41-847e-d0b35047047c-kube-api-access-nj5d8\") pod \"certified-operators-b6k7d\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.675083 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-catalog-content\") pod \"certified-operators-b6k7d\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.675118 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.675381 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:17:00.175369499 +0000 UTC m=+158.921192148 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.675514 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-utilities\") pod \"certified-operators-b6k7d\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.676130 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-catalog-content\") pod \"certified-operators-b6k7d\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.714405 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj5d8\" (UniqueName: \"kubernetes.io/projected/41823bd1-3ae0-4f41-847e-d0b35047047c-kube-api-access-nj5d8\") pod \"certified-operators-b6k7d\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.729074 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.737993 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tgvqk"] Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.738909 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.765006 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tgvqk"] Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.779177 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.779514 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:17:00.279499198 +0000 UTC m=+159.025321847 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.832474 4751 patch_prober.go:28] interesting pod/router-default-5444994796-w42cs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:16:59 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 30 21:16:59 crc kubenswrapper[4751]: [+]process-running ok Jan 30 21:16:59 crc kubenswrapper[4751]: healthz check failed Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.832515 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w42cs" podUID="dfb29a82-8be0-4219-81b1-fecfcb4e1061" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.846173 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.857825 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" event={"ID":"ac8a7752-ba4b-41eb-a085-b493f6876beb","Type":"ContainerStarted","Data":"5bd0a32fd9ca1552f10c0ac96d767afcae7130b0de77b2177771634405e2dc75"} Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.857865 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" event={"ID":"ac8a7752-ba4b-41eb-a085-b493f6876beb","Type":"ContainerStarted","Data":"44a2bed8fef5945a849baa1e134366bbd773e20e0c67a2d3605650f83b772f73"} Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.877679 4751 generic.go:334] "Generic (PLEG): container finished" podID="cc9ed63a-23a2-4b50-a290-0409ff14fd95" containerID="e542d53fa8d38b44c5415e62c079644bf8fb944ad32fd45452254fbadf2caa51" exitCode=0 Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.878318 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" event={"ID":"cc9ed63a-23a2-4b50-a290-0409ff14fd95","Type":"ContainerDied","Data":"e542d53fa8d38b44c5415e62c079644bf8fb944ad32fd45452254fbadf2caa51"} Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.880675 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-utilities\") pod \"community-operators-tgvqk\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.880713 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-catalog-content\") pod \"community-operators-tgvqk\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:16:59 crc 
kubenswrapper[4751]: I0130 21:16:59.880731 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p25b9\" (UniqueName: \"kubernetes.io/projected/5607f892-9717-439f-a920-102a2bd3d960-kube-api-access-p25b9\") pod \"community-operators-tgvqk\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.880778 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.881022 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:17:00.381011251 +0000 UTC m=+159.126833890 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.914741 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" podStartSLOduration=8.914723868 podStartE2EDuration="8.914723868s" podCreationTimestamp="2026-01-30 21:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:59.902249011 +0000 UTC m=+158.648071660" watchObservedRunningTime="2026-01-30 21:16:59.914723868 +0000 UTC m=+158.660546507" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.958919 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-54ffx"] Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.985104 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.985406 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-utilities\") pod \"community-operators-tgvqk\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.985491 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-catalog-content\") pod \"community-operators-tgvqk\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " 
pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.985538 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:17:00.48551251 +0000 UTC m=+159.231335159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.985627 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p25b9\" (UniqueName: \"kubernetes.io/projected/5607f892-9717-439f-a920-102a2bd3d960-kube-api-access-p25b9\") pod \"community-operators-tgvqk\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.985797 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.987152 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-catalog-content\") pod \"community-operators-tgvqk\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.987357 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-utilities\") pod \"community-operators-tgvqk\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.989201 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:17:00.489184043 +0000 UTC m=+159.235006812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.991451 4751 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 30 21:16:59 crc kubenswrapper[4751]: W0130 21:16:59.992838 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda59ef52d_2f47_42ac_a233_0285be317cc9.slice/crio-3de6594576878279730bf6ad7c0a39ba28b9c63e62d19e6f38aaeefbede04797 WatchSource:0}: Error finding container 3de6594576878279730bf6ad7c0a39ba28b9c63e62d19e6f38aaeefbede04797: Status 404 returned error can't find the container with id 3de6594576878279730bf6ad7c0a39ba28b9c63e62d19e6f38aaeefbede04797 Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.009517 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p25b9\" (UniqueName: \"kubernetes.io/projected/5607f892-9717-439f-a920-102a2bd3d960-kube-api-access-p25b9\") pod \"community-operators-tgvqk\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.060294 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wvvq8"] Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.088426 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:17:00 crc kubenswrapper[4751]: E0130 21:17:00.088716 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:17:00.588702216 +0000 UTC m=+159.334524865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.118365 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.163685 4751 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T21:16:59.991487592Z","Handler":null,"Name":""} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.179551 4751 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.179605 4751 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.190197 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.195974 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.196013 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.257918 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.291336 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.310152 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.342053 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tgvqk"] Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.352051 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6k7d"] Jan 30 21:17:00 crc kubenswrapper[4751]: W0130 21:17:00.360061 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41823bd1_3ae0_4f41_847e_d0b35047047c.slice/crio-689c5d4239fa30eae8db15cc294718aa0d2dc9d4b894015fbeb7c4691e93d36c WatchSource:0}: Error finding container 689c5d4239fa30eae8db15cc294718aa0d2dc9d4b894015fbeb7c4691e93d36c: Status 404 returned error can't find the container with id 689c5d4239fa30eae8db15cc294718aa0d2dc9d4b894015fbeb7c4691e93d36c Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.398780 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.593595 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9lsr5"] Jan 30 21:17:00 crc kubenswrapper[4751]: W0130 21:17:00.602499 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73d0a80a_e569_428a_b251_33f28e06fffd.slice/crio-af1fafb4fa1bc5d4e5549e32e14665bb190720767667d7915533461f80e83d20 WatchSource:0}: Error finding container af1fafb4fa1bc5d4e5549e32e14665bb190720767667d7915533461f80e83d20: Status 404 returned error can't find the container with id af1fafb4fa1bc5d4e5549e32e14665bb190720767667d7915533461f80e83d20 Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.829434 4751 patch_prober.go:28] interesting pod/router-default-5444994796-w42cs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:17:00 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 30 21:17:00 crc kubenswrapper[4751]: [+]process-running ok Jan 30 21:17:00 crc kubenswrapper[4751]: healthz check failed Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.829503 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w42cs" podUID="dfb29a82-8be0-4219-81b1-fecfcb4e1061" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.888391 4751 generic.go:334] "Generic (PLEG): container finished" podID="a59ef52d-2f47-42ac-a233-0285be317cc9" containerID="59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170" exitCode=0 Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.888475 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54ffx" event={"ID":"a59ef52d-2f47-42ac-a233-0285be317cc9","Type":"ContainerDied","Data":"59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170"} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.888505 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54ffx" 
event={"ID":"a59ef52d-2f47-42ac-a233-0285be317cc9","Type":"ContainerStarted","Data":"3de6594576878279730bf6ad7c0a39ba28b9c63e62d19e6f38aaeefbede04797"} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.890156 4751 generic.go:334] "Generic (PLEG): container finished" podID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerID="c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478" exitCode=0 Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.890217 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvvq8" event={"ID":"5de678c2-f43a-44fa-ab58-259f765c3e31","Type":"ContainerDied","Data":"c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478"} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.890243 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvvq8" event={"ID":"5de678c2-f43a-44fa-ab58-259f765c3e31","Type":"ContainerStarted","Data":"25d69c268722a1234878b44da4db4eac47a853d184bfae913c7a2d4ea1ad28d3"} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.892076 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.893199 4751 generic.go:334] "Generic (PLEG): container finished" podID="5607f892-9717-439f-a920-102a2bd3d960" containerID="660c0699f36cdfbc8888077f14b9b8efed6cc41a8b3dc7ca02dfbf3a83512f36" exitCode=0 Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.893318 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgvqk" event={"ID":"5607f892-9717-439f-a920-102a2bd3d960","Type":"ContainerDied","Data":"660c0699f36cdfbc8888077f14b9b8efed6cc41a8b3dc7ca02dfbf3a83512f36"} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.893444 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgvqk" event={"ID":"5607f892-9717-439f-a920-102a2bd3d960","Type":"ContainerStarted","Data":"d742ab7f8f1e8c741124ab96be31bce53b84eb48f204dc5b3fc704a32bc25d11"} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.900630 4751 generic.go:334] "Generic (PLEG): container finished" podID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerID="b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d" exitCode=0 Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.900685 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6k7d" event={"ID":"41823bd1-3ae0-4f41-847e-d0b35047047c","Type":"ContainerDied","Data":"b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d"} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.900705 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6k7d" event={"ID":"41823bd1-3ae0-4f41-847e-d0b35047047c","Type":"ContainerStarted","Data":"689c5d4239fa30eae8db15cc294718aa0d2dc9d4b894015fbeb7c4691e93d36c"} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.905010 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" event={"ID":"73d0a80a-e569-428a-b251-33f28e06fffd","Type":"ContainerStarted","Data":"28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51"} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.905059 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" 
event={"ID":"73d0a80a-e569-428a-b251-33f28e06fffd","Type":"ContainerStarted","Data":"af1fafb4fa1bc5d4e5549e32e14665bb190720767667d7915533461f80e83d20"} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.993083 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" podStartSLOduration=132.993064975 podStartE2EDuration="2m12.993064975s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:17:00.990777347 +0000 UTC m=+159.736599996" watchObservedRunningTime="2026-01-30 21:17:00.993064975 +0000 UTC m=+159.738887634" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.166813 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.167424 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.171093 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.171264 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.174923 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.184825 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.205499 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.205558 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.306924 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47wdx\" (UniqueName: \"kubernetes.io/projected/cc9ed63a-23a2-4b50-a290-0409ff14fd95-kube-api-access-47wdx\") pod \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.307003 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc9ed63a-23a2-4b50-a290-0409ff14fd95-config-volume\") pod \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.307057 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc9ed63a-23a2-4b50-a290-0409ff14fd95-secret-volume\") pod \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.307313 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.307418 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.307523 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.307909 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc9ed63a-23a2-4b50-a290-0409ff14fd95-config-volume" (OuterVolumeSpecName: "config-volume") pod "cc9ed63a-23a2-4b50-a290-0409ff14fd95" (UID: "cc9ed63a-23a2-4b50-a290-0409ff14fd95"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.317400 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6v829"] Jan 30 21:17:01 crc kubenswrapper[4751]: E0130 21:17:01.317606 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc9ed63a-23a2-4b50-a290-0409ff14fd95" containerName="collect-profiles" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.317622 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc9ed63a-23a2-4b50-a290-0409ff14fd95" containerName="collect-profiles" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.317738 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc9ed63a-23a2-4b50-a290-0409ff14fd95" containerName="collect-profiles" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.318456 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.325166 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.325196 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc9ed63a-23a2-4b50-a290-0409ff14fd95-kube-api-access-47wdx" (OuterVolumeSpecName: "kube-api-access-47wdx") pod "cc9ed63a-23a2-4b50-a290-0409ff14fd95" (UID: "cc9ed63a-23a2-4b50-a290-0409ff14fd95"). InnerVolumeSpecName "kube-api-access-47wdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.329357 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6v829"] Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.330415 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc9ed63a-23a2-4b50-a290-0409ff14fd95-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cc9ed63a-23a2-4b50-a290-0409ff14fd95" (UID: "cc9ed63a-23a2-4b50-a290-0409ff14fd95"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.336896 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.409052 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-utilities\") pod \"redhat-marketplace-6v829\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") " pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.409123 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm6bs\" (UniqueName: \"kubernetes.io/projected/94e03be5-809d-49ba-9318-6222131628f5-kube-api-access-sm6bs\") pod \"redhat-marketplace-6v829\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") " pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.409146 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-catalog-content\") pod \"redhat-marketplace-6v829\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") " pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.409188 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47wdx\" (UniqueName: \"kubernetes.io/projected/cc9ed63a-23a2-4b50-a290-0409ff14fd95-kube-api-access-47wdx\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.409199 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc9ed63a-23a2-4b50-a290-0409ff14fd95-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.409209 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc9ed63a-23a2-4b50-a290-0409ff14fd95-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.493667 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.510744 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm6bs\" (UniqueName: \"kubernetes.io/projected/94e03be5-809d-49ba-9318-6222131628f5-kube-api-access-sm6bs\") pod \"redhat-marketplace-6v829\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") " pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.510859 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-catalog-content\") pod \"redhat-marketplace-6v829\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") " pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.512035 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-catalog-content\") pod \"redhat-marketplace-6v829\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") " pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.512099 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-utilities\") pod \"redhat-marketplace-6v829\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") " pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.512970 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-utilities\") pod \"redhat-marketplace-6v829\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") " pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.540505 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm6bs\" (UniqueName: \"kubernetes.io/projected/94e03be5-809d-49ba-9318-6222131628f5-kube-api-access-sm6bs\") pod \"redhat-marketplace-6v829\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") " pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.667279 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.721442 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8lbjc"] Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.722700 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.733692 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lbjc"] Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.818295 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-utilities\") pod \"redhat-marketplace-8lbjc\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.818353 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jskpc\" (UniqueName: \"kubernetes.io/projected/2aa7a824-734e-401d-b0af-ead8bb03dad5-kube-api-access-jskpc\") pod \"redhat-marketplace-8lbjc\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.818441 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-catalog-content\") pod \"redhat-marketplace-8lbjc\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.830976 4751 patch_prober.go:28] interesting pod/router-default-5444994796-w42cs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:17:01 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 30 21:17:01 crc kubenswrapper[4751]: [+]process-running ok Jan 30 21:17:01 crc kubenswrapper[4751]: healthz check failed Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.831018 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w42cs" podUID="dfb29a82-8be0-4219-81b1-fecfcb4e1061" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.913989 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" event={"ID":"cc9ed63a-23a2-4b50-a290-0409ff14fd95","Type":"ContainerDied","Data":"b6ac56afbe946ed8a3114588a856c9022e503d0feb1988aaa10f041f9dcbf7e4"} Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.914036 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6ac56afbe946ed8a3114588a856c9022e503d0feb1988aaa10f041f9dcbf7e4" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.914200 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.914215 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.920095 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-utilities\") pod \"redhat-marketplace-8lbjc\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.920152 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jskpc\" (UniqueName: \"kubernetes.io/projected/2aa7a824-734e-401d-b0af-ead8bb03dad5-kube-api-access-jskpc\") pod \"redhat-marketplace-8lbjc\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.920225 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-catalog-content\") pod \"redhat-marketplace-8lbjc\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.921548 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-catalog-content\") pod \"redhat-marketplace-8lbjc\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.921571 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-utilities\") pod \"redhat-marketplace-8lbjc\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.924837 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6v829"] Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.940957 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jskpc\" (UniqueName: \"kubernetes.io/projected/2aa7a824-734e-401d-b0af-ead8bb03dad5-kube-api-access-jskpc\") pod \"redhat-marketplace-8lbjc\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.984038 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.988287 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 21:17:02 crc kubenswrapper[4751]: W0130 21:17:02.002617 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf5428d8a_a5ca_4889_a8b5_6fc7edf2d121.slice/crio-f9337cf67ac2e9ca2b6ef7626a4fd708da54aacdace5c42c9e59aa8ae2b8a9f8 WatchSource:0}: Error finding 
container f9337cf67ac2e9ca2b6ef7626a4fd708da54aacdace5c42c9e59aa8ae2b8a9f8: Status 404 returned error can't find the container with id f9337cf67ac2e9ca2b6ef7626a4fd708da54aacdace5c42c9e59aa8ae2b8a9f8
Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.046721 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8lbjc"
Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.321188 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zct7w"]
Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.330635 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zct7w"
Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.331353 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zct7w"]
Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.336145 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.427159 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97l8g\" (UniqueName: \"kubernetes.io/projected/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-kube-api-access-97l8g\") pod \"redhat-operators-zct7w\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") " pod="openshift-marketplace/redhat-operators-zct7w"
Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.427220 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-utilities\") pod \"redhat-operators-zct7w\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") " pod="openshift-marketplace/redhat-operators-zct7w"
Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.427240 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-catalog-content\") pod \"redhat-operators-zct7w\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") " pod="openshift-marketplace/redhat-operators-zct7w"
Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.477719 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lbjc"]
Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.528186 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-utilities\") pod \"redhat-operators-zct7w\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") " pod="openshift-marketplace/redhat-operators-zct7w"
Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.528240 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-catalog-content\") pod \"redhat-operators-zct7w\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") " pod="openshift-marketplace/redhat-operators-zct7w"
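
The manager.go:1169 watch-event warnings above (at 21:17:00 for the catalog and registry sandboxes, and again at 21:17:02 for the revision-pruner sandbox, whose 404 completes at the top of this stretch) look like a benign startup race: cadvisor apparently notices the new crio-<id> cgroup before the runtime can answer for that container, the one-off lookup returns 404, and the same container IDs then show up normally in later PLEG events. The cgroup path encodes the pod UID with dashes swapped for underscores; a small illustrative Python sketch (regex and file name are assumptions) recovers the (pod UID, container ID) pairs:

```python
import re

# The watch-event warnings name the cgroup, e.g.
#   /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41823bd1_3ae0_....slice/crio-689c...
# The pod UID is embedded with dashes replaced by underscores.
CGROUP_RE = re.compile(
    r'kubepods(?:-\w+)?-pod(?P<uid>[0-9a-f_]+)\.slice/crio-(?P<cid>[0-9a-f]{64})'
)

def watch_event_targets(journal_text: str):
    """Yield (pod_uid, container_id) pairs with the UID restored to dashed form."""
    for m in CGROUP_RE.finditer(journal_text):
        yield m.group("uid").replace("_", "-"), m.group("cid")

if __name__ == "__main__":
    with open("kubelet.log") as f:  # hypothetical saved excerpt of this journal
        for uid, cid in set(watch_event_targets(f.read())):
            print(uid, cid[:12])
```

Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.528346 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97l8g\" (UniqueName: \"kubernetes.io/projected/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-kube-api-access-97l8g\") pod 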
\"redhat-operators-zct7w\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") " pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.529043 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-utilities\") pod \"redhat-operators-zct7w\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") " pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.529349 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-catalog-content\") pod \"redhat-operators-zct7w\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") " pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.547622 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97l8g\" (UniqueName: \"kubernetes.io/projected/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-kube-api-access-97l8g\") pod \"redhat-operators-zct7w\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") " pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.670761 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.715278 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fvkc4"] Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.716541 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.721703 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fvkc4"] Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.832112 4751 patch_prober.go:28] interesting pod/router-default-5444994796-w42cs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:17:02 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 30 21:17:02 crc kubenswrapper[4751]: [+]process-running ok Jan 30 21:17:02 crc kubenswrapper[4751]: healthz check failed Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.832278 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ksmq\" (UniqueName: \"kubernetes.io/projected/80287af8-6129-4973-8442-887fa4b3ee9f-kube-api-access-5ksmq\") pod \"redhat-operators-fvkc4\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.832350 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-catalog-content\") pod \"redhat-operators-fvkc4\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.832403 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-utilities\") pod \"redhat-operators-fvkc4\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.832509 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w42cs" podUID="dfb29a82-8be0-4219-81b1-fecfcb4e1061" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.934877 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-utilities\") pod \"redhat-operators-fvkc4\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.934972 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ksmq\" (UniqueName: \"kubernetes.io/projected/80287af8-6129-4973-8442-887fa4b3ee9f-kube-api-access-5ksmq\") pod \"redhat-operators-fvkc4\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.935012 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-catalog-content\") pod \"redhat-operators-fvkc4\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.935541 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-utilities\") pod \"redhat-operators-fvkc4\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.935642 4751 generic.go:334] "Generic (PLEG): container finished" podID="94e03be5-809d-49ba-9318-6222131628f5" containerID="2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e" exitCode=0 Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.935702 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6v829" event={"ID":"94e03be5-809d-49ba-9318-6222131628f5","Type":"ContainerDied","Data":"2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e"} Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.935726 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6v829" event={"ID":"94e03be5-809d-49ba-9318-6222131628f5","Type":"ContainerStarted","Data":"960e022b4f8bb566d2fdbe8e623c147ebba25b0f4a883e6013345ce05433bda9"} Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.938744 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-catalog-content\") pod \"redhat-operators-fvkc4\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.953763 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121","Type":"ContainerStarted","Data":"e519758b7271807d6d89766f5400397363b935aa0b64ac3537487245ff75d044"} Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.953815 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121","Type":"ContainerStarted","Data":"f9337cf67ac2e9ca2b6ef7626a4fd708da54aacdace5c42c9e59aa8ae2b8a9f8"} Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.968490 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ksmq\" (UniqueName: \"kubernetes.io/projected/80287af8-6129-4973-8442-887fa4b3ee9f-kube-api-access-5ksmq\") pod \"redhat-operators-fvkc4\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.970783 4751 generic.go:334] "Generic (PLEG): container finished" podID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerID="01546679b55fd82a5346039e7e8bf30c9a6fe860dba2c776bd0984b001c41248" exitCode=0 Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.971919 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lbjc" event={"ID":"2aa7a824-734e-401d-b0af-ead8bb03dad5","Type":"ContainerDied","Data":"01546679b55fd82a5346039e7e8bf30c9a6fe860dba2c776bd0984b001c41248"} Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.971962 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lbjc" event={"ID":"2aa7a824-734e-401d-b0af-ead8bb03dad5","Type":"ContainerStarted","Data":"24f317d1701097d9103031354b6663adbe17eff186ff15234f4ba88c7fab3126"} Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.987025 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.9870065270000001 podStartE2EDuration="1.987006527s" podCreationTimestamp="2026-01-30 21:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:17:02.982658477 +0000 UTC m=+161.728481126" watchObservedRunningTime="2026-01-30 21:17:02.987006527 +0000 UTC m=+161.732829166" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.060205 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.200355 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zct7w"] Jan 30 21:17:03 crc kubenswrapper[4751]: W0130 21:17:03.209264 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb05ec0ea_cf7e_46ce_9814_a4597ebcf238.slice/crio-804ecfb30bc123f3020417772e2716aa7215e9f0bbcc895b3845fd67eade69b4 WatchSource:0}: Error finding container 804ecfb30bc123f3020417772e2716aa7215e9f0bbcc895b3845fd67eade69b4: Status 404 returned error can't find the container with id 804ecfb30bc123f3020417772e2716aa7215e9f0bbcc895b3845fd67eade69b4 Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.259757 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.266728 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.289712 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fvkc4"] Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.408723 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.409014 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.413187 4751 patch_prober.go:28] interesting pod/console-f9d7485db-7bw65 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.416466 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-7bw65" podUID="07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.417723 4751 patch_prober.go:28] interesting pod/downloads-7954f5f757-8l2v5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.417756 4751 patch_prober.go:28] interesting pod/downloads-7954f5f757-8l2v5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.417785 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8l2v5" podUID="21dc9dc0-702d-49a7-baed-f8e70f6867f3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.417781 4751 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-7954f5f757-8l2v5" podUID="21dc9dc0-702d-49a7-baed-f8e70f6867f3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.826800 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.829596 4751 patch_prober.go:28] interesting pod/router-default-5444994796-w42cs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:17:03 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 30 21:17:03 crc kubenswrapper[4751]: [+]process-running ok Jan 30 21:17:03 crc kubenswrapper[4751]: healthz check failed Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.829647 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w42cs" podUID="dfb29a82-8be0-4219-81b1-fecfcb4e1061" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.980776 4751 generic.go:334] "Generic (PLEG): container finished" podID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerID="3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642" exitCode=0 Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.983008 4751 generic.go:334] "Generic (PLEG): container finished" podID="80287af8-6129-4973-8442-887fa4b3ee9f" containerID="4f7f32ebba510377188fdb9f775c5bdc1a0070f2a59bec9d0e32afa0fdd36c30" exitCode=0 Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.989374 4751 generic.go:334] "Generic (PLEG): container finished" podID="f5428d8a-a5ca-4889-a8b5-6fc7edf2d121" containerID="e519758b7271807d6d89766f5400397363b935aa0b64ac3537487245ff75d044" exitCode=0 Jan 30 21:17:04 crc kubenswrapper[4751]: I0130 21:17:04.011174 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zct7w" event={"ID":"b05ec0ea-cf7e-46ce-9814-a4597ebcf238","Type":"ContainerDied","Data":"3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642"} Jan 30 21:17:04 crc kubenswrapper[4751]: I0130 21:17:04.011210 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zct7w" event={"ID":"b05ec0ea-cf7e-46ce-9814-a4597ebcf238","Type":"ContainerStarted","Data":"804ecfb30bc123f3020417772e2716aa7215e9f0bbcc895b3845fd67eade69b4"} Jan 30 21:17:04 crc kubenswrapper[4751]: I0130 21:17:04.011238 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvkc4" event={"ID":"80287af8-6129-4973-8442-887fa4b3ee9f","Type":"ContainerDied","Data":"4f7f32ebba510377188fdb9f775c5bdc1a0070f2a59bec9d0e32afa0fdd36c30"} Jan 30 21:17:04 crc kubenswrapper[4751]: I0130 21:17:04.011248 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvkc4" event={"ID":"80287af8-6129-4973-8442-887fa4b3ee9f","Type":"ContainerStarted","Data":"dc6fc5c63903f1bd0c4e0a90425019daa79c25f9ce21c6dcff83a787794afb40"} Jan 30 21:17:04 crc kubenswrapper[4751]: I0130 21:17:04.011257 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121","Type":"ContainerDied","Data":"e519758b7271807d6d89766f5400397363b935aa0b64ac3537487245ff75d044"} Jan 30 21:17:04 crc kubenswrapper[4751]: I0130 21:17:04.828254 4751 patch_prober.go:28] interesting pod/router-default-5444994796-w42cs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:17:04 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 30 21:17:04 crc kubenswrapper[4751]: [+]process-running ok Jan 30 21:17:04 crc kubenswrapper[4751]: healthz check failed Jan 30 21:17:04 crc kubenswrapper[4751]: I0130 21:17:04.828310 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w42cs" podUID="dfb29a82-8be0-4219-81b1-fecfcb4e1061" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:17:05 crc kubenswrapper[4751]: I0130 21:17:05.283016 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:17:05 crc kubenswrapper[4751]: I0130 21:17:05.400601 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kubelet-dir\") pod \"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121\" (UID: \"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121\") " Jan 30 21:17:05 crc kubenswrapper[4751]: I0130 21:17:05.400700 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kube-api-access\") pod \"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121\" (UID: \"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121\") " Jan 30 21:17:05 crc kubenswrapper[4751]: I0130 21:17:05.400718 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f5428d8a-a5ca-4889-a8b5-6fc7edf2d121" (UID: "f5428d8a-a5ca-4889-a8b5-6fc7edf2d121"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:17:05 crc kubenswrapper[4751]: I0130 21:17:05.400955 4751 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:05 crc kubenswrapper[4751]: I0130 21:17:05.407725 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f5428d8a-a5ca-4889-a8b5-6fc7edf2d121" (UID: "f5428d8a-a5ca-4889-a8b5-6fc7edf2d121"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:17:05 crc kubenswrapper[4751]: I0130 21:17:05.502926 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:05 crc kubenswrapper[4751]: I0130 21:17:05.828919 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:17:05 crc kubenswrapper[4751]: I0130 21:17:05.830828 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:17:05 crc kubenswrapper[4751]: I0130 21:17:05.929418 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.005515 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.013595 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121","Type":"ContainerDied","Data":"f9337cf67ac2e9ca2b6ef7626a4fd708da54aacdace5c42c9e59aa8ae2b8a9f8"} Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.013629 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9337cf67ac2e9ca2b6ef7626a4fd708da54aacdace5c42c9e59aa8ae2b8a9f8" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.297197 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-hrfwj" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.647595 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 21:17:06 crc kubenswrapper[4751]: E0130 21:17:06.648023 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5428d8a-a5ca-4889-a8b5-6fc7edf2d121" containerName="pruner" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.648034 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5428d8a-a5ca-4889-a8b5-6fc7edf2d121" containerName="pruner" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.648120 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5428d8a-a5ca-4889-a8b5-6fc7edf2d121" containerName="pruner" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.648435 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.650298 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.650368 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.657536 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.732212 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/062f5c13-ba50-4901-b4a1-92a8dce64389-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"062f5c13-ba50-4901-b4a1-92a8dce64389\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.732260 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/062f5c13-ba50-4901-b4a1-92a8dce64389-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"062f5c13-ba50-4901-b4a1-92a8dce64389\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.833442 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/062f5c13-ba50-4901-b4a1-92a8dce64389-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"062f5c13-ba50-4901-b4a1-92a8dce64389\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.833552 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/062f5c13-ba50-4901-b4a1-92a8dce64389-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"062f5c13-ba50-4901-b4a1-92a8dce64389\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.833559 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/062f5c13-ba50-4901-b4a1-92a8dce64389-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"062f5c13-ba50-4901-b4a1-92a8dce64389\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.859221 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/062f5c13-ba50-4901-b4a1-92a8dce64389-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"062f5c13-ba50-4901-b4a1-92a8dce64389\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.968787 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:17:11 crc kubenswrapper[4751]: I0130 21:17:11.099438 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:17:11 crc kubenswrapper[4751]: I0130 21:17:11.105563 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:17:11 crc kubenswrapper[4751]: I0130 21:17:11.199156 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:17:13 crc kubenswrapper[4751]: I0130 21:17:13.413639 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:17:13 crc kubenswrapper[4751]: I0130 21:17:13.421471 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:17:13 crc kubenswrapper[4751]: I0130 21:17:13.430754 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-8l2v5" Jan 30 21:17:20 crc kubenswrapper[4751]: I0130 21:17:20.131269 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:17:20 crc kubenswrapper[4751]: I0130 21:17:20.407903 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:17:24 crc kubenswrapper[4751]: I0130 21:17:24.126711 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:17:24 crc kubenswrapper[4751]: I0130 21:17:24.127247 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:17:32 crc kubenswrapper[4751]: E0130 21:17:32.190088 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 21:17:32 crc kubenswrapper[4751]: E0130 21:17:32.190685 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p25b9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tgvqk_openshift-marketplace(5607f892-9717-439f-a920-102a2bd3d960): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:17:32 crc kubenswrapper[4751]: E0130 21:17:32.191778 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tgvqk" podUID="5607f892-9717-439f-a920-102a2bd3d960" Jan 30 21:17:32 crc kubenswrapper[4751]: E0130 21:17:32.234290 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 30 21:17:32 crc kubenswrapper[4751]: E0130 21:17:32.234635 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jskpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8lbjc_openshift-marketplace(2aa7a824-734e-401d-b0af-ead8bb03dad5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 21:17:32 crc kubenswrapper[4751]: E0130 21:17:32.236176 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-8lbjc" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5"
Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.756073 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tgvqk" podUID="5607f892-9717-439f-a920-102a2bd3d960"
Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.756118 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8lbjc" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5"
Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.846262 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
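
Each cancelled pull above surfaces three times: a CRI-level "PullImage from image service failed" record, a kuberuntime_manager dump of the full init-container spec tagged UnhandledError, and a pod_workers "Error syncing pod, skipping" with reason ErrImagePull; on the next sync the reason flips to ImagePullBackOff, as in the 21:17:33.756 records above. Retries then happen under an exponential backoff; a minimal sketch of that shape (the 10 s base and 5 min cap mirror the kubelet's usual container-backoff defaults and are assumptions here, not values read from this log):

```python
# Shape of the retry delay behind ImagePullBackOff: exponential with a cap.
# BASE_S and CAP_S are assumed defaults, not taken from this log.
BASE_S, CAP_S = 10, 300

def backoff_delays(failures: int) -> list[int]:
    """Delay applied before each retry after 1..failures consecutive pull errors."""
    return [min(BASE_S * 2**i, CAP_S) for i in range(failures)]

print(backoff_delays(7))  # [10, 20, 40, 80, 160, 300, 300]
```

Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.847917 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache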
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dh7pr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-wvvq8_openshift-marketplace(5de678c2-f43a-44fa-ab58-259f765c3e31): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.849494 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-wvvq8" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.876033 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.876170 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tng6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-54ffx_openshift-marketplace(a59ef52d-2f47-42ac-a233-0285be317cc9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.877372 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-54ffx" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.898208 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.898710 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sm6bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6v829_openshift-marketplace(94e03be5-809d-49ba-9318-6222131628f5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.900084 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6v829" podUID="94e03be5-809d-49ba-9318-6222131628f5" Jan 30 21:17:34 crc kubenswrapper[4751]: I0130 21:17:34.252377 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" Jan 30 21:17:37 crc kubenswrapper[4751]: E0130 21:17:37.331634 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6v829" podUID="94e03be5-809d-49ba-9318-6222131628f5" Jan 30 21:17:37 crc kubenswrapper[4751]: E0130 21:17:37.331678 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-wvvq8" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" Jan 30 21:17:37 crc kubenswrapper[4751]: E0130 21:17:37.333424 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-54ffx" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" Jan 30 21:17:37 crc kubenswrapper[4751]: E0130 21:17:37.371586 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from 
manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 21:17:37 crc kubenswrapper[4751]: E0130 21:17:37.371751 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ksmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-fvkc4_openshift-marketplace(80287af8-6129-4973-8442-887fa4b3ee9f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:17:37 crc kubenswrapper[4751]: E0130 21:17:37.373117 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-fvkc4" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" Jan 30 21:17:37 crc kubenswrapper[4751]: E0130 21:17:37.403514 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 21:17:37 crc kubenswrapper[4751]: E0130 21:17:37.403661 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-97l8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zct7w_openshift-marketplace(b05ec0ea-cf7e-46ce-9814-a4597ebcf238): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:17:37 crc kubenswrapper[4751]: E0130 21:17:37.405369 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zct7w" podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" Jan 30 21:17:37 crc kubenswrapper[4751]: I0130 21:17:37.524394 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-c477w"] Jan 30 21:17:37 crc kubenswrapper[4751]: W0130 21:17:37.530520 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c30a687_0b58_4a63_b9e3_3a3624676358.slice/crio-d3e621d75ce79ffaee93ba12f9e803df2fcc545f6ada1dc30ac5fdc0ee406f5f WatchSource:0}: Error finding container d3e621d75ce79ffaee93ba12f9e803df2fcc545f6ada1dc30ac5fdc0ee406f5f: Status 404 returned error can't find the container with id d3e621d75ce79ffaee93ba12f9e803df2fcc545f6ada1dc30ac5fdc0ee406f5f Jan 30 21:17:37 crc kubenswrapper[4751]: I0130 21:17:37.584145 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 21:17:38 crc kubenswrapper[4751]: I0130 21:17:38.205396 4751 generic.go:334] "Generic (PLEG): container finished" podID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerID="49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb" exitCode=0 Jan 30 21:17:38 crc kubenswrapper[4751]: I0130 21:17:38.205836 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6k7d" event={"ID":"41823bd1-3ae0-4f41-847e-d0b35047047c","Type":"ContainerDied","Data":"49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb"} Jan 30 21:17:38 crc kubenswrapper[4751]: I0130 21:17:38.209772 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-c477w" event={"ID":"3c30a687-0b58-4a63-b9e3-3a3624676358","Type":"ContainerStarted","Data":"619e3e4731fc1aee78a7e7b7e9b131442f16bff59999c94297828e6cc2a19c4e"} Jan 30 21:17:38 crc kubenswrapper[4751]: I0130 21:17:38.209791 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-c477w" event={"ID":"3c30a687-0b58-4a63-b9e3-3a3624676358","Type":"ContainerStarted","Data":"036c71febd360410e089e5d20d16b6a20c09c8db293dc1e93730cac18b201cfc"} Jan 30 21:17:38 crc kubenswrapper[4751]: I0130 21:17:38.209800 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-c477w" event={"ID":"3c30a687-0b58-4a63-b9e3-3a3624676358","Type":"ContainerStarted","Data":"d3e621d75ce79ffaee93ba12f9e803df2fcc545f6ada1dc30ac5fdc0ee406f5f"} Jan 30 21:17:38 crc kubenswrapper[4751]: I0130 21:17:38.212789 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"062f5c13-ba50-4901-b4a1-92a8dce64389","Type":"ContainerStarted","Data":"d87927a76626e017e149cd3548630cdcac05f9e0d61f134b1062a5375f5c4ae4"} Jan 30 21:17:38 crc kubenswrapper[4751]: I0130 21:17:38.212814 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"062f5c13-ba50-4901-b4a1-92a8dce64389","Type":"ContainerStarted","Data":"7e2659e63802921d7c9a8cefe03242547d2ee43437129c21da56e86b4dd9ee5c"} Jan 30 21:17:38 crc kubenswrapper[4751]: E0130 21:17:38.217519 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zct7w" podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" Jan 30 21:17:38 crc kubenswrapper[4751]: E0130 21:17:38.217687 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-fvkc4" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" Jan 30 21:17:38 crc kubenswrapper[4751]: I0130 21:17:38.282837 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=32.282820549 podStartE2EDuration="32.282820549s" podCreationTimestamp="2026-01-30 21:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:17:38.277250208 +0000 UTC m=+197.023072857" watchObservedRunningTime="2026-01-30 21:17:38.282820549 +0000 UTC m=+197.028643198" Jan 30 21:17:38 crc kubenswrapper[4751]: I0130 21:17:38.329407 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-c477w" podStartSLOduration=170.329388934 podStartE2EDuration="2m50.329388934s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:17:38.327533667 +0000 UTC m=+197.073356336" watchObservedRunningTime="2026-01-30 21:17:38.329388934 +0000 UTC m=+197.075211603" Jan 30 21:17:39 crc kubenswrapper[4751]: I0130 21:17:39.218991 4751 generic.go:334] "Generic (PLEG): container finished" 
podID="062f5c13-ba50-4901-b4a1-92a8dce64389" containerID="d87927a76626e017e149cd3548630cdcac05f9e0d61f134b1062a5375f5c4ae4" exitCode=0 Jan 30 21:17:39 crc kubenswrapper[4751]: I0130 21:17:39.219197 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"062f5c13-ba50-4901-b4a1-92a8dce64389","Type":"ContainerDied","Data":"d87927a76626e017e149cd3548630cdcac05f9e0d61f134b1062a5375f5c4ae4"} Jan 30 21:17:39 crc kubenswrapper[4751]: I0130 21:17:39.223213 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6k7d" event={"ID":"41823bd1-3ae0-4f41-847e-d0b35047047c","Type":"ContainerStarted","Data":"c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc"} Jan 30 21:17:39 crc kubenswrapper[4751]: I0130 21:17:39.262827 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b6k7d" podStartSLOduration=2.383626132 podStartE2EDuration="40.262811912s" podCreationTimestamp="2026-01-30 21:16:59 +0000 UTC" firstStartedPulling="2026-01-30 21:17:00.904552343 +0000 UTC m=+159.650375002" lastFinishedPulling="2026-01-30 21:17:38.783738103 +0000 UTC m=+197.529560782" observedRunningTime="2026-01-30 21:17:39.261289464 +0000 UTC m=+198.007112113" watchObservedRunningTime="2026-01-30 21:17:39.262811912 +0000 UTC m=+198.008634551" Jan 30 21:17:39 crc kubenswrapper[4751]: I0130 21:17:39.846725 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:17:39 crc kubenswrapper[4751]: I0130 21:17:39.846793 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:17:40 crc kubenswrapper[4751]: I0130 21:17:40.445243 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:17:40 crc kubenswrapper[4751]: I0130 21:17:40.549065 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/062f5c13-ba50-4901-b4a1-92a8dce64389-kubelet-dir\") pod \"062f5c13-ba50-4901-b4a1-92a8dce64389\" (UID: \"062f5c13-ba50-4901-b4a1-92a8dce64389\") " Jan 30 21:17:40 crc kubenswrapper[4751]: I0130 21:17:40.549144 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/062f5c13-ba50-4901-b4a1-92a8dce64389-kube-api-access\") pod \"062f5c13-ba50-4901-b4a1-92a8dce64389\" (UID: \"062f5c13-ba50-4901-b4a1-92a8dce64389\") " Jan 30 21:17:40 crc kubenswrapper[4751]: I0130 21:17:40.549245 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/062f5c13-ba50-4901-b4a1-92a8dce64389-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "062f5c13-ba50-4901-b4a1-92a8dce64389" (UID: "062f5c13-ba50-4901-b4a1-92a8dce64389"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:17:40 crc kubenswrapper[4751]: I0130 21:17:40.549436 4751 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/062f5c13-ba50-4901-b4a1-92a8dce64389-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:40 crc kubenswrapper[4751]: I0130 21:17:40.555208 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/062f5c13-ba50-4901-b4a1-92a8dce64389-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "062f5c13-ba50-4901-b4a1-92a8dce64389" (UID: "062f5c13-ba50-4901-b4a1-92a8dce64389"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:17:40 crc kubenswrapper[4751]: I0130 21:17:40.650350 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/062f5c13-ba50-4901-b4a1-92a8dce64389-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:40 crc kubenswrapper[4751]: I0130 21:17:40.983522 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-b6k7d" podUID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerName="registry-server" probeResult="failure" output=< Jan 30 21:17:40 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:17:40 crc kubenswrapper[4751]: > Jan 30 21:17:41 crc kubenswrapper[4751]: I0130 21:17:41.233572 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"062f5c13-ba50-4901-b4a1-92a8dce64389","Type":"ContainerDied","Data":"7e2659e63802921d7c9a8cefe03242547d2ee43437129c21da56e86b4dd9ee5c"} Jan 30 21:17:41 crc kubenswrapper[4751]: I0130 21:17:41.234120 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e2659e63802921d7c9a8cefe03242547d2ee43437129c21da56e86b4dd9ee5c" Jan 30 21:17:41 crc kubenswrapper[4751]: I0130 21:17:41.233593 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:17:44 crc kubenswrapper[4751]: I0130 21:17:44.849078 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 21:17:44 crc kubenswrapper[4751]: E0130 21:17:44.849993 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062f5c13-ba50-4901-b4a1-92a8dce64389" containerName="pruner" Jan 30 21:17:44 crc kubenswrapper[4751]: I0130 21:17:44.850020 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="062f5c13-ba50-4901-b4a1-92a8dce64389" containerName="pruner" Jan 30 21:17:44 crc kubenswrapper[4751]: I0130 21:17:44.850282 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="062f5c13-ba50-4901-b4a1-92a8dce64389" containerName="pruner" Jan 30 21:17:44 crc kubenswrapper[4751]: I0130 21:17:44.851146 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:17:44 crc kubenswrapper[4751]: I0130 21:17:44.854690 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 21:17:44 crc kubenswrapper[4751]: I0130 21:17:44.854812 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 21:17:44 crc kubenswrapper[4751]: I0130 21:17:44.858489 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 21:17:45 crc kubenswrapper[4751]: I0130 21:17:45.008540 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2910afc2-0fe9-492b-8dcf-ddab577f7685-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2910afc2-0fe9-492b-8dcf-ddab577f7685\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:17:45 crc kubenswrapper[4751]: I0130 21:17:45.008796 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2910afc2-0fe9-492b-8dcf-ddab577f7685-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2910afc2-0fe9-492b-8dcf-ddab577f7685\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:17:45 crc kubenswrapper[4751]: I0130 21:17:45.109708 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2910afc2-0fe9-492b-8dcf-ddab577f7685-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2910afc2-0fe9-492b-8dcf-ddab577f7685\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:17:45 crc kubenswrapper[4751]: I0130 21:17:45.109799 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2910afc2-0fe9-492b-8dcf-ddab577f7685-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2910afc2-0fe9-492b-8dcf-ddab577f7685\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:17:45 crc kubenswrapper[4751]: I0130 21:17:45.110231 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2910afc2-0fe9-492b-8dcf-ddab577f7685-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2910afc2-0fe9-492b-8dcf-ddab577f7685\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:17:45 crc kubenswrapper[4751]: I0130 21:17:45.151569 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2910afc2-0fe9-492b-8dcf-ddab577f7685-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2910afc2-0fe9-492b-8dcf-ddab577f7685\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:17:45 crc kubenswrapper[4751]: I0130 21:17:45.171146 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:17:45 crc kubenswrapper[4751]: I0130 21:17:45.607453 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 21:17:46 crc kubenswrapper[4751]: I0130 21:17:46.264867 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2910afc2-0fe9-492b-8dcf-ddab577f7685","Type":"ContainerStarted","Data":"6e554bd68c7601b22fb2c7b3a7e062ba07539d9cb0c6177c7b4edf94cb637484"} Jan 30 21:17:46 crc kubenswrapper[4751]: I0130 21:17:46.265133 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2910afc2-0fe9-492b-8dcf-ddab577f7685","Type":"ContainerStarted","Data":"a47739a235fe95d1cbc19228a27614e1f260c6154881c9efaae1076098f63274"} Jan 30 21:17:46 crc kubenswrapper[4751]: I0130 21:17:46.278948 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.278929142 podStartE2EDuration="2.278929142s" podCreationTimestamp="2026-01-30 21:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:17:46.275317844 +0000 UTC m=+205.021140533" watchObservedRunningTime="2026-01-30 21:17:46.278929142 +0000 UTC m=+205.024751791" Jan 30 21:17:47 crc kubenswrapper[4751]: I0130 21:17:47.270125 4751 generic.go:334] "Generic (PLEG): container finished" podID="2910afc2-0fe9-492b-8dcf-ddab577f7685" containerID="6e554bd68c7601b22fb2c7b3a7e062ba07539d9cb0c6177c7b4edf94cb637484" exitCode=0 Jan 30 21:17:47 crc kubenswrapper[4751]: I0130 21:17:47.270163 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2910afc2-0fe9-492b-8dcf-ddab577f7685","Type":"ContainerDied","Data":"6e554bd68c7601b22fb2c7b3a7e062ba07539d9cb0c6177c7b4edf94cb637484"} Jan 30 21:17:48 crc kubenswrapper[4751]: I0130 21:17:48.517791 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:17:48 crc kubenswrapper[4751]: I0130 21:17:48.656834 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2910afc2-0fe9-492b-8dcf-ddab577f7685-kube-api-access\") pod \"2910afc2-0fe9-492b-8dcf-ddab577f7685\" (UID: \"2910afc2-0fe9-492b-8dcf-ddab577f7685\") " Jan 30 21:17:48 crc kubenswrapper[4751]: I0130 21:17:48.656900 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2910afc2-0fe9-492b-8dcf-ddab577f7685-kubelet-dir\") pod \"2910afc2-0fe9-492b-8dcf-ddab577f7685\" (UID: \"2910afc2-0fe9-492b-8dcf-ddab577f7685\") " Jan 30 21:17:48 crc kubenswrapper[4751]: I0130 21:17:48.656948 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2910afc2-0fe9-492b-8dcf-ddab577f7685-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2910afc2-0fe9-492b-8dcf-ddab577f7685" (UID: "2910afc2-0fe9-492b-8dcf-ddab577f7685"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:17:48 crc kubenswrapper[4751]: I0130 21:17:48.657149 4751 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2910afc2-0fe9-492b-8dcf-ddab577f7685-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:48 crc kubenswrapper[4751]: I0130 21:17:48.665360 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2910afc2-0fe9-492b-8dcf-ddab577f7685-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2910afc2-0fe9-492b-8dcf-ddab577f7685" (UID: "2910afc2-0fe9-492b-8dcf-ddab577f7685"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:17:48 crc kubenswrapper[4751]: I0130 21:17:48.758710 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2910afc2-0fe9-492b-8dcf-ddab577f7685-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:49 crc kubenswrapper[4751]: I0130 21:17:49.285348 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgvqk" event={"ID":"5607f892-9717-439f-a920-102a2bd3d960","Type":"ContainerStarted","Data":"e0c09ac548892d16ce214d98d72512bd48e15448460ce8ae35e4043474ce58cc"} Jan 30 21:17:49 crc kubenswrapper[4751]: I0130 21:17:49.286627 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2910afc2-0fe9-492b-8dcf-ddab577f7685","Type":"ContainerDied","Data":"a47739a235fe95d1cbc19228a27614e1f260c6154881c9efaae1076098f63274"} Jan 30 21:17:49 crc kubenswrapper[4751]: I0130 21:17:49.286663 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a47739a235fe95d1cbc19228a27614e1f260c6154881c9efaae1076098f63274" Jan 30 21:17:49 crc kubenswrapper[4751]: I0130 21:17:49.286712 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:17:49 crc kubenswrapper[4751]: I0130 21:17:49.912934 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:17:49 crc kubenswrapper[4751]: I0130 21:17:49.964304 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:17:50 crc kubenswrapper[4751]: I0130 21:17:50.297751 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54ffx" event={"ID":"a59ef52d-2f47-42ac-a233-0285be317cc9","Type":"ContainerStarted","Data":"1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b"} Jan 30 21:17:50 crc kubenswrapper[4751]: I0130 21:17:50.305365 4751 generic.go:334] "Generic (PLEG): container finished" podID="94e03be5-809d-49ba-9318-6222131628f5" containerID="cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855" exitCode=0 Jan 30 21:17:50 crc kubenswrapper[4751]: I0130 21:17:50.305568 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6v829" event={"ID":"94e03be5-809d-49ba-9318-6222131628f5","Type":"ContainerDied","Data":"cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855"} Jan 30 21:17:50 crc kubenswrapper[4751]: I0130 21:17:50.312420 4751 generic.go:334] "Generic (PLEG): container finished" podID="5607f892-9717-439f-a920-102a2bd3d960" containerID="e0c09ac548892d16ce214d98d72512bd48e15448460ce8ae35e4043474ce58cc" exitCode=0 Jan 30 21:17:50 crc kubenswrapper[4751]: I0130 21:17:50.312482 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgvqk" event={"ID":"5607f892-9717-439f-a920-102a2bd3d960","Type":"ContainerDied","Data":"e0c09ac548892d16ce214d98d72512bd48e15448460ce8ae35e4043474ce58cc"} Jan 30 21:17:50 crc kubenswrapper[4751]: I0130 21:17:50.314506 4751 generic.go:334] "Generic (PLEG): container finished" podID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerID="b07ef308640fd17ca101597385790cdc7d8a83b7a8df7bce4290518e0c697c43" exitCode=0 Jan 30 21:17:50 crc kubenswrapper[4751]: I0130 21:17:50.314572 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lbjc" event={"ID":"2aa7a824-734e-401d-b0af-ead8bb03dad5","Type":"ContainerDied","Data":"b07ef308640fd17ca101597385790cdc7d8a83b7a8df7bce4290518e0c697c43"} Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.009145 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b6k7d"] Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.320682 4751 generic.go:334] "Generic (PLEG): container finished" podID="a59ef52d-2f47-42ac-a233-0285be317cc9" containerID="1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b" exitCode=0 Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.320968 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54ffx" event={"ID":"a59ef52d-2f47-42ac-a233-0285be317cc9","Type":"ContainerDied","Data":"1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b"} Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.323102 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6v829" 
event={"ID":"94e03be5-809d-49ba-9318-6222131628f5","Type":"ContainerStarted","Data":"1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6"} Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.325501 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgvqk" event={"ID":"5607f892-9717-439f-a920-102a2bd3d960","Type":"ContainerStarted","Data":"10d79502f57ca29d080e9753142598555bcee310b2933e4570a1f0619498f923"} Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.327319 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zct7w" event={"ID":"b05ec0ea-cf7e-46ce-9814-a4597ebcf238","Type":"ContainerStarted","Data":"f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1"} Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.329296 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lbjc" event={"ID":"2aa7a824-734e-401d-b0af-ead8bb03dad5","Type":"ContainerStarted","Data":"242b44373e4553b6a95b1dab9ee35d628ad1d218dbe55524005712a0987bb4b9"} Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.329490 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b6k7d" podUID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerName="registry-server" containerID="cri-o://c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc" gracePeriod=2 Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.364078 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tgvqk" podStartSLOduration=2.530692689 podStartE2EDuration="52.364059392s" podCreationTimestamp="2026-01-30 21:16:59 +0000 UTC" firstStartedPulling="2026-01-30 21:17:00.895937854 +0000 UTC m=+159.641760503" lastFinishedPulling="2026-01-30 21:17:50.729304567 +0000 UTC m=+209.475127206" observedRunningTime="2026-01-30 21:17:51.361017201 +0000 UTC m=+210.106839840" watchObservedRunningTime="2026-01-30 21:17:51.364059392 +0000 UTC m=+210.109882041" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.398635 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6v829" podStartSLOduration=2.5212305329999998 podStartE2EDuration="50.398618589s" podCreationTimestamp="2026-01-30 21:17:01 +0000 UTC" firstStartedPulling="2026-01-30 21:17:02.937569419 +0000 UTC m=+161.683392068" lastFinishedPulling="2026-01-30 21:17:50.814957475 +0000 UTC m=+209.560780124" observedRunningTime="2026-01-30 21:17:51.382211717 +0000 UTC m=+210.128034386" watchObservedRunningTime="2026-01-30 21:17:51.398618589 +0000 UTC m=+210.144441228" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.418092 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8lbjc" podStartSLOduration=2.635512707 podStartE2EDuration="50.418074602s" podCreationTimestamp="2026-01-30 21:17:01 +0000 UTC" firstStartedPulling="2026-01-30 21:17:02.973309978 +0000 UTC m=+161.719132627" lastFinishedPulling="2026-01-30 21:17:50.755871843 +0000 UTC m=+209.501694522" observedRunningTime="2026-01-30 21:17:51.414245087 +0000 UTC m=+210.160067756" watchObservedRunningTime="2026-01-30 21:17:51.418074602 +0000 UTC m=+210.163897271" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.669087 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.669474 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.678181 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.798400 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-catalog-content\") pod \"41823bd1-3ae0-4f41-847e-d0b35047047c\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.798449 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-utilities\") pod \"41823bd1-3ae0-4f41-847e-d0b35047047c\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.798552 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj5d8\" (UniqueName: \"kubernetes.io/projected/41823bd1-3ae0-4f41-847e-d0b35047047c-kube-api-access-nj5d8\") pod \"41823bd1-3ae0-4f41-847e-d0b35047047c\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.799415 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-utilities" (OuterVolumeSpecName: "utilities") pod "41823bd1-3ae0-4f41-847e-d0b35047047c" (UID: "41823bd1-3ae0-4f41-847e-d0b35047047c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.804402 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41823bd1-3ae0-4f41-847e-d0b35047047c-kube-api-access-nj5d8" (OuterVolumeSpecName: "kube-api-access-nj5d8") pod "41823bd1-3ae0-4f41-847e-d0b35047047c" (UID: "41823bd1-3ae0-4f41-847e-d0b35047047c"). InnerVolumeSpecName "kube-api-access-nj5d8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.842007 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 21:17:51 crc kubenswrapper[4751]: E0130 21:17:51.842192 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2910afc2-0fe9-492b-8dcf-ddab577f7685" containerName="pruner" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.842203 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2910afc2-0fe9-492b-8dcf-ddab577f7685" containerName="pruner" Jan 30 21:17:51 crc kubenswrapper[4751]: E0130 21:17:51.842218 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerName="registry-server" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.842224 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerName="registry-server" Jan 30 21:17:51 crc kubenswrapper[4751]: E0130 21:17:51.842235 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerName="extract-utilities" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.842241 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerName="extract-utilities" Jan 30 21:17:51 crc kubenswrapper[4751]: E0130 21:17:51.842251 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerName="extract-content" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.842257 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerName="extract-content" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.842376 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerName="registry-server" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.842405 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2910afc2-0fe9-492b-8dcf-ddab577f7685" containerName="pruner" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.842730 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.844765 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.844765 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.849359 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.850655 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41823bd1-3ae0-4f41-847e-d0b35047047c" (UID: "41823bd1-3ae0-4f41-847e-d0b35047047c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.899685 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj5d8\" (UniqueName: \"kubernetes.io/projected/41823bd1-3ae0-4f41-847e-d0b35047047c-kube-api-access-nj5d8\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.899723 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.899757 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.000997 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-var-lock\") pod \"installer-9-crc\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.001384 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.001439 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/937199db-2864-42e7-bd7b-65315d94920f-kube-api-access\") pod \"installer-9-crc\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.047525 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.047577 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.101996 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/937199db-2864-42e7-bd7b-65315d94920f-kube-api-access\") pod \"installer-9-crc\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.102079 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-var-lock\") pod \"installer-9-crc\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.102106 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:52 
crc kubenswrapper[4751]: I0130 21:17:52.102180 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.102215 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-var-lock\") pod \"installer-9-crc\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.135520 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/937199db-2864-42e7-bd7b-65315d94920f-kube-api-access\") pod \"installer-9-crc\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.155042 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.337689 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54ffx" event={"ID":"a59ef52d-2f47-42ac-a233-0285be317cc9","Type":"ContainerStarted","Data":"1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2"} Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.343114 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvkc4" event={"ID":"80287af8-6129-4973-8442-887fa4b3ee9f","Type":"ContainerStarted","Data":"1390ec748689f89777a1f1c34363a9724760856f9473679e8a6408ff0a08227f"} Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.355774 4751 generic.go:334] "Generic (PLEG): container finished" podID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerID="c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc" exitCode=0 Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.355857 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6k7d" event={"ID":"41823bd1-3ae0-4f41-847e-d0b35047047c","Type":"ContainerDied","Data":"c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc"} Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.355858 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.355895 4751 scope.go:117] "RemoveContainer" containerID="c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.355885 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6k7d" event={"ID":"41823bd1-3ae0-4f41-847e-d0b35047047c","Type":"ContainerDied","Data":"689c5d4239fa30eae8db15cc294718aa0d2dc9d4b894015fbeb7c4691e93d36c"} Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.363683 4751 generic.go:334] "Generic (PLEG): container finished" podID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerID="f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1" exitCode=0 Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.364516 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zct7w" event={"ID":"b05ec0ea-cf7e-46ce-9814-a4597ebcf238","Type":"ContainerDied","Data":"f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1"} Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.388089 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-54ffx" podStartSLOduration=2.54675195 podStartE2EDuration="53.388072092s" podCreationTimestamp="2026-01-30 21:16:59 +0000 UTC" firstStartedPulling="2026-01-30 21:17:00.891891001 +0000 UTC m=+159.637713680" lastFinishedPulling="2026-01-30 21:17:51.733211173 +0000 UTC m=+210.479033822" observedRunningTime="2026-01-30 21:17:52.384773443 +0000 UTC m=+211.130596092" watchObservedRunningTime="2026-01-30 21:17:52.388072092 +0000 UTC m=+211.133894731" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.391176 4751 scope.go:117] "RemoveContainer" containerID="49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.407006 4751 scope.go:117] "RemoveContainer" containerID="b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.420843 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b6k7d"] Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.424948 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b6k7d"] Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.428918 4751 scope.go:117] "RemoveContainer" containerID="c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc" Jan 30 21:17:52 crc kubenswrapper[4751]: E0130 21:17:52.429375 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc\": container with ID starting with c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc not found: ID does not exist" containerID="c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.429403 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc"} err="failed to get container status \"c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc\": rpc error: code = NotFound desc = could not find container 
\"c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc\": container with ID starting with c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc not found: ID does not exist" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.429435 4751 scope.go:117] "RemoveContainer" containerID="49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb" Jan 30 21:17:52 crc kubenswrapper[4751]: E0130 21:17:52.430661 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb\": container with ID starting with 49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb not found: ID does not exist" containerID="49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.430707 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb"} err="failed to get container status \"49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb\": rpc error: code = NotFound desc = could not find container \"49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb\": container with ID starting with 49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb not found: ID does not exist" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.430736 4751 scope.go:117] "RemoveContainer" containerID="b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d" Jan 30 21:17:52 crc kubenswrapper[4751]: E0130 21:17:52.431452 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d\": container with ID starting with b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d not found: ID does not exist" containerID="b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.431479 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d"} err="failed to get container status \"b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d\": rpc error: code = NotFound desc = could not find container \"b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d\": container with ID starting with b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d not found: ID does not exist" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.594513 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 21:17:52 crc kubenswrapper[4751]: W0130 21:17:52.597953 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod937199db_2864_42e7_bd7b_65315d94920f.slice/crio-f462afd414faa46099f0f64a6a8955052851c4a4930523036193d340eca901c4 WatchSource:0}: Error finding container f462afd414faa46099f0f64a6a8955052851c4a4930523036193d340eca901c4: Status 404 returned error can't find the container with id f462afd414faa46099f0f64a6a8955052851c4a4930523036193d340eca901c4 Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.723761 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-6v829" 
podUID="94e03be5-809d-49ba-9318-6222131628f5" containerName="registry-server" probeResult="failure" output=< Jan 30 21:17:52 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:17:52 crc kubenswrapper[4751]: > Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.091677 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-8lbjc" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerName="registry-server" probeResult="failure" output=< Jan 30 21:17:53 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:17:53 crc kubenswrapper[4751]: > Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.371153 4751 generic.go:334] "Generic (PLEG): container finished" podID="80287af8-6129-4973-8442-887fa4b3ee9f" containerID="1390ec748689f89777a1f1c34363a9724760856f9473679e8a6408ff0a08227f" exitCode=0 Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.371211 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvkc4" event={"ID":"80287af8-6129-4973-8442-887fa4b3ee9f","Type":"ContainerDied","Data":"1390ec748689f89777a1f1c34363a9724760856f9473679e8a6408ff0a08227f"} Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.374316 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"937199db-2864-42e7-bd7b-65315d94920f","Type":"ContainerStarted","Data":"e69a32abf266db71cf32cbc11401a25e95afb6e6d4db9827794b0fd5f381fb26"} Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.374363 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"937199db-2864-42e7-bd7b-65315d94920f","Type":"ContainerStarted","Data":"f462afd414faa46099f0f64a6a8955052851c4a4930523036193d340eca901c4"} Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.376109 4751 generic.go:334] "Generic (PLEG): container finished" podID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerID="32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89" exitCode=0 Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.376156 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvvq8" event={"ID":"5de678c2-f43a-44fa-ab58-259f765c3e31","Type":"ContainerDied","Data":"32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89"} Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.380043 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zct7w" event={"ID":"b05ec0ea-cf7e-46ce-9814-a4597ebcf238","Type":"ContainerStarted","Data":"6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7"} Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.406696 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zct7w" podStartSLOduration=2.563045779 podStartE2EDuration="51.406674199s" podCreationTimestamp="2026-01-30 21:17:02 +0000 UTC" firstStartedPulling="2026-01-30 21:17:03.982107487 +0000 UTC m=+162.727930126" lastFinishedPulling="2026-01-30 21:17:52.825735897 +0000 UTC m=+211.571558546" observedRunningTime="2026-01-30 21:17:53.404417012 +0000 UTC m=+212.150239681" watchObservedRunningTime="2026-01-30 21:17:53.406674199 +0000 UTC m=+212.152496868" Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.419182 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.419158494 podStartE2EDuration="2.419158494s" podCreationTimestamp="2026-01-30 21:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:17:53.416838724 +0000 UTC m=+212.162661383" watchObservedRunningTime="2026-01-30 21:17:53.419158494 +0000 UTC m=+212.164981183" Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.985090 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41823bd1-3ae0-4f41-847e-d0b35047047c" path="/var/lib/kubelet/pods/41823bd1-3ae0-4f41-847e-d0b35047047c/volumes" Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.127027 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.127387 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.127427 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.127821 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.127874 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125" gracePeriod=600 Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.388642 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvvq8" event={"ID":"5de678c2-f43a-44fa-ab58-259f765c3e31","Type":"ContainerStarted","Data":"6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470"} Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.391743 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvkc4" event={"ID":"80287af8-6129-4973-8442-887fa4b3ee9f","Type":"ContainerStarted","Data":"3421b4190428564de2526db739509fd62498485491cdb7f40a973dab016062f2"} Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.398534 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125" exitCode=0 Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.399115 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" 
event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125"} Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.415532 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wvvq8" podStartSLOduration=2.311230155 podStartE2EDuration="55.415445131s" podCreationTimestamp="2026-01-30 21:16:59 +0000 UTC" firstStartedPulling="2026-01-30 21:17:00.891841919 +0000 UTC m=+159.637664558" lastFinishedPulling="2026-01-30 21:17:53.996056875 +0000 UTC m=+212.741879534" observedRunningTime="2026-01-30 21:17:54.411784412 +0000 UTC m=+213.157607061" watchObservedRunningTime="2026-01-30 21:17:54.415445131 +0000 UTC m=+213.161267780" Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.428719 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fvkc4" podStartSLOduration=2.653403521 podStartE2EDuration="52.428707389s" podCreationTimestamp="2026-01-30 21:17:02 +0000 UTC" firstStartedPulling="2026-01-30 21:17:03.983868342 +0000 UTC m=+162.729690991" lastFinishedPulling="2026-01-30 21:17:53.7591722 +0000 UTC m=+212.504994859" observedRunningTime="2026-01-30 21:17:54.427114282 +0000 UTC m=+213.172936931" watchObservedRunningTime="2026-01-30 21:17:54.428707389 +0000 UTC m=+213.174530028" Jan 30 21:17:55 crc kubenswrapper[4751]: I0130 21:17:55.406448 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"daa3657d48b883db14b4975f24f93c0b2c6f7eb8738d3c0267f1f4f003ba63aa"} Jan 30 21:17:59 crc kubenswrapper[4751]: I0130 21:17:59.471956 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:17:59 crc kubenswrapper[4751]: I0130 21:17:59.472684 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:17:59 crc kubenswrapper[4751]: I0130 21:17:59.517671 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:17:59 crc kubenswrapper[4751]: I0130 21:17:59.657992 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:17:59 crc kubenswrapper[4751]: I0130 21:17:59.658064 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:17:59 crc kubenswrapper[4751]: I0130 21:17:59.693042 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:18:00 crc kubenswrapper[4751]: I0130 21:18:00.119590 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:18:00 crc kubenswrapper[4751]: I0130 21:18:00.120478 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:18:00 crc kubenswrapper[4751]: I0130 21:18:00.169533 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:18:00 crc kubenswrapper[4751]: I0130 21:18:00.501292 4751 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:18:00 crc kubenswrapper[4751]: I0130 21:18:00.507863 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:18:00 crc kubenswrapper[4751]: I0130 21:18:00.509546 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:18:01 crc kubenswrapper[4751]: I0130 21:18:01.746911 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:18:01 crc kubenswrapper[4751]: I0130 21:18:01.819986 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:18:02 crc kubenswrapper[4751]: I0130 21:18:02.008871 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tgvqk"] Jan 30 21:18:02 crc kubenswrapper[4751]: I0130 21:18:02.133565 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:18:02 crc kubenswrapper[4751]: I0130 21:18:02.197661 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:18:02 crc kubenswrapper[4751]: I0130 21:18:02.674008 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:18:02 crc kubenswrapper[4751]: I0130 21:18:02.674466 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:18:02 crc kubenswrapper[4751]: I0130 21:18:02.762539 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:18:02 crc kubenswrapper[4751]: I0130 21:18:02.767546 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6dcxn"] Jan 30 21:18:03 crc kubenswrapper[4751]: I0130 21:18:03.060740 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:18:03 crc kubenswrapper[4751]: I0130 21:18:03.061061 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:18:03 crc kubenswrapper[4751]: I0130 21:18:03.101968 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:18:03 crc kubenswrapper[4751]: I0130 21:18:03.465001 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tgvqk" podUID="5607f892-9717-439f-a920-102a2bd3d960" containerName="registry-server" containerID="cri-o://10d79502f57ca29d080e9753142598555bcee310b2933e4570a1f0619498f923" gracePeriod=2 Jan 30 21:18:03 crc kubenswrapper[4751]: I0130 21:18:03.518917 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:18:03 crc kubenswrapper[4751]: I0130 21:18:03.533288 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:18:04 crc kubenswrapper[4751]: I0130 21:18:04.429608 4751 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lbjc"] Jan 30 21:18:04 crc kubenswrapper[4751]: I0130 21:18:04.430788 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8lbjc" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerName="registry-server" containerID="cri-o://242b44373e4553b6a95b1dab9ee35d628ad1d218dbe55524005712a0987bb4b9" gracePeriod=2 Jan 30 21:18:04 crc kubenswrapper[4751]: I0130 21:18:04.474106 4751 generic.go:334] "Generic (PLEG): container finished" podID="5607f892-9717-439f-a920-102a2bd3d960" containerID="10d79502f57ca29d080e9753142598555bcee310b2933e4570a1f0619498f923" exitCode=0 Jan 30 21:18:04 crc kubenswrapper[4751]: I0130 21:18:04.474196 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgvqk" event={"ID":"5607f892-9717-439f-a920-102a2bd3d960","Type":"ContainerDied","Data":"10d79502f57ca29d080e9753142598555bcee310b2933e4570a1f0619498f923"} Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.484805 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgvqk" event={"ID":"5607f892-9717-439f-a920-102a2bd3d960","Type":"ContainerDied","Data":"d742ab7f8f1e8c741124ab96be31bce53b84eb48f204dc5b3fc704a32bc25d11"} Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.485292 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d742ab7f8f1e8c741124ab96be31bce53b84eb48f204dc5b3fc704a32bc25d11" Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.488211 4751 generic.go:334] "Generic (PLEG): container finished" podID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerID="242b44373e4553b6a95b1dab9ee35d628ad1d218dbe55524005712a0987bb4b9" exitCode=0 Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.488671 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lbjc" event={"ID":"2aa7a824-734e-401d-b0af-ead8bb03dad5","Type":"ContainerDied","Data":"242b44373e4553b6a95b1dab9ee35d628ad1d218dbe55524005712a0987bb4b9"} Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.516356 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.573817 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-catalog-content\") pod \"5607f892-9717-439f-a920-102a2bd3d960\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.574050 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-utilities\") pod \"5607f892-9717-439f-a920-102a2bd3d960\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.574094 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p25b9\" (UniqueName: \"kubernetes.io/projected/5607f892-9717-439f-a920-102a2bd3d960-kube-api-access-p25b9\") pod \"5607f892-9717-439f-a920-102a2bd3d960\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.576095 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-utilities" (OuterVolumeSpecName: "utilities") pod "5607f892-9717-439f-a920-102a2bd3d960" (UID: "5607f892-9717-439f-a920-102a2bd3d960"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.585704 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5607f892-9717-439f-a920-102a2bd3d960-kube-api-access-p25b9" (OuterVolumeSpecName: "kube-api-access-p25b9") pod "5607f892-9717-439f-a920-102a2bd3d960" (UID: "5607f892-9717-439f-a920-102a2bd3d960"). InnerVolumeSpecName "kube-api-access-p25b9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.647876 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5607f892-9717-439f-a920-102a2bd3d960" (UID: "5607f892-9717-439f-a920-102a2bd3d960"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.676847 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.676876 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.676886 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p25b9\" (UniqueName: \"kubernetes.io/projected/5607f892-9717-439f-a920-102a2bd3d960-kube-api-access-p25b9\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.498281 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.528434 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tgvqk"] Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.534395 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tgvqk"] Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.681992 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.706519 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-utilities\") pod \"2aa7a824-734e-401d-b0af-ead8bb03dad5\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.706814 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-catalog-content\") pod \"2aa7a824-734e-401d-b0af-ead8bb03dad5\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.708753 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-utilities" (OuterVolumeSpecName: "utilities") pod "2aa7a824-734e-401d-b0af-ead8bb03dad5" (UID: "2aa7a824-734e-401d-b0af-ead8bb03dad5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.763109 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2aa7a824-734e-401d-b0af-ead8bb03dad5" (UID: "2aa7a824-734e-401d-b0af-ead8bb03dad5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.808123 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jskpc\" (UniqueName: \"kubernetes.io/projected/2aa7a824-734e-401d-b0af-ead8bb03dad5-kube-api-access-jskpc\") pod \"2aa7a824-734e-401d-b0af-ead8bb03dad5\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.809131 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fvkc4"] Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.809583 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.809612 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.810613 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fvkc4" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" containerName="registry-server" containerID="cri-o://3421b4190428564de2526db739509fd62498485491cdb7f40a973dab016062f2" gracePeriod=2 Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.814857 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aa7a824-734e-401d-b0af-ead8bb03dad5-kube-api-access-jskpc" (OuterVolumeSpecName: "kube-api-access-jskpc") pod "2aa7a824-734e-401d-b0af-ead8bb03dad5" (UID: "2aa7a824-734e-401d-b0af-ead8bb03dad5"). InnerVolumeSpecName "kube-api-access-jskpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.911242 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jskpc\" (UniqueName: \"kubernetes.io/projected/2aa7a824-734e-401d-b0af-ead8bb03dad5-kube-api-access-jskpc\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.504616 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lbjc" event={"ID":"2aa7a824-734e-401d-b0af-ead8bb03dad5","Type":"ContainerDied","Data":"24f317d1701097d9103031354b6663adbe17eff186ff15234f4ba88c7fab3126"} Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.504665 4751 scope.go:117] "RemoveContainer" containerID="242b44373e4553b6a95b1dab9ee35d628ad1d218dbe55524005712a0987bb4b9" Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.504762 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.518165 4751 generic.go:334] "Generic (PLEG): container finished" podID="80287af8-6129-4973-8442-887fa4b3ee9f" containerID="3421b4190428564de2526db739509fd62498485491cdb7f40a973dab016062f2" exitCode=0 Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.518207 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvkc4" event={"ID":"80287af8-6129-4973-8442-887fa4b3ee9f","Type":"ContainerDied","Data":"3421b4190428564de2526db739509fd62498485491cdb7f40a973dab016062f2"} Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.528895 4751 scope.go:117] "RemoveContainer" containerID="b07ef308640fd17ca101597385790cdc7d8a83b7a8df7bce4290518e0c697c43" Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.550730 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lbjc"] Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.553413 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lbjc"] Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.561171 4751 scope.go:117] "RemoveContainer" containerID="01546679b55fd82a5346039e7e8bf30c9a6fe860dba2c776bd0984b001c41248" Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.827640 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.986352 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" path="/var/lib/kubelet/pods/2aa7a824-734e-401d-b0af-ead8bb03dad5/volumes" Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.987153 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5607f892-9717-439f-a920-102a2bd3d960" path="/var/lib/kubelet/pods/5607f892-9717-439f-a920-102a2bd3d960/volumes" Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.024956 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-utilities\") pod \"80287af8-6129-4973-8442-887fa4b3ee9f\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.025050 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ksmq\" (UniqueName: \"kubernetes.io/projected/80287af8-6129-4973-8442-887fa4b3ee9f-kube-api-access-5ksmq\") pod \"80287af8-6129-4973-8442-887fa4b3ee9f\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.025089 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-catalog-content\") pod \"80287af8-6129-4973-8442-887fa4b3ee9f\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.027274 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-utilities" (OuterVolumeSpecName: "utilities") pod "80287af8-6129-4973-8442-887fa4b3ee9f" (UID: "80287af8-6129-4973-8442-887fa4b3ee9f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.040063 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80287af8-6129-4973-8442-887fa4b3ee9f-kube-api-access-5ksmq" (OuterVolumeSpecName: "kube-api-access-5ksmq") pod "80287af8-6129-4973-8442-887fa4b3ee9f" (UID: "80287af8-6129-4973-8442-887fa4b3ee9f"). InnerVolumeSpecName "kube-api-access-5ksmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.127371 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.127417 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ksmq\" (UniqueName: \"kubernetes.io/projected/80287af8-6129-4973-8442-887fa4b3ee9f-kube-api-access-5ksmq\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.195364 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80287af8-6129-4973-8442-887fa4b3ee9f" (UID: "80287af8-6129-4973-8442-887fa4b3ee9f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.228392 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.530989 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvkc4" event={"ID":"80287af8-6129-4973-8442-887fa4b3ee9f","Type":"ContainerDied","Data":"dc6fc5c63903f1bd0c4e0a90425019daa79c25f9ce21c6dcff83a787794afb40"} Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.531048 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.531392 4751 scope.go:117] "RemoveContainer" containerID="3421b4190428564de2526db739509fd62498485491cdb7f40a973dab016062f2" Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.557022 4751 scope.go:117] "RemoveContainer" containerID="1390ec748689f89777a1f1c34363a9724760856f9473679e8a6408ff0a08227f" Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.573178 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fvkc4"] Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.582838 4751 scope.go:117] "RemoveContainer" containerID="4f7f32ebba510377188fdb9f775c5bdc1a0070f2a59bec9d0e32afa0fdd36c30" Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.582839 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fvkc4"] Jan 30 21:18:09 crc kubenswrapper[4751]: I0130 21:18:09.986237 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" path="/var/lib/kubelet/pods/80287af8-6129-4973-8442-887fa4b3ee9f/volumes" Jan 30 21:18:27 crc kubenswrapper[4751]: I0130 21:18:27.794975 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" podUID="8a52a543-c530-48d9-a046-ac4008df0477" containerName="oauth-openshift" containerID="cri-o://c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e" gracePeriod=15 Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.206713 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.277980 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5f7896898-cgrzp"] Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.278649 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" containerName="extract-content" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.278670 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" containerName="extract-content" Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.278684 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerName="extract-content" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.278695 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerName="extract-content" Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.278712 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5607f892-9717-439f-a920-102a2bd3d960" containerName="registry-server" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.278723 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5607f892-9717-439f-a920-102a2bd3d960" containerName="registry-server" Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.278746 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerName="extract-utilities" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.278757 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" 
containerName="extract-utilities" Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.278772 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" containerName="registry-server" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.278782 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" containerName="registry-server" Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.278796 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a52a543-c530-48d9-a046-ac4008df0477" containerName="oauth-openshift" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.278806 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a52a543-c530-48d9-a046-ac4008df0477" containerName="oauth-openshift" Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.278820 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" containerName="extract-utilities" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.278832 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" containerName="extract-utilities" Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.278849 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5607f892-9717-439f-a920-102a2bd3d960" containerName="extract-content" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.278860 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5607f892-9717-439f-a920-102a2bd3d960" containerName="extract-content" Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.278880 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerName="registry-server" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.278890 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerName="registry-server" Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.278907 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5607f892-9717-439f-a920-102a2bd3d960" containerName="extract-utilities" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.278917 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5607f892-9717-439f-a920-102a2bd3d960" containerName="extract-utilities" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.279074 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5607f892-9717-439f-a920-102a2bd3d960" containerName="registry-server" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.279096 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerName="registry-server" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.279113 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" containerName="registry-server" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.279144 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a52a543-c530-48d9-a046-ac4008df0477" containerName="oauth-openshift" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.279809 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.287061 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5f7896898-cgrzp"] Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.320852 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-login\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.321154 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-serving-cert\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.322115 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-trusted-ca-bundle\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.322401 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkplm\" (UniqueName: \"kubernetes.io/projected/8a52a543-c530-48d9-a046-ac4008df0477-kube-api-access-qkplm\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.322555 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-session\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.322776 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-idp-0-file-data\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.322900 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-audit-policies\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.323045 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-router-certs\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.323252 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-provider-selection\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.323807 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-service-ca\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.323960 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-ocp-branding-template\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.324155 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-error\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.324378 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a52a543-c530-48d9-a046-ac4008df0477-audit-dir\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.324973 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-cliconfig\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.325441 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-service-ca\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.325590 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-audit-policies\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.325755 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-router-certs\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.325897 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.326057 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.326190 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.326408 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-template-error\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.326682 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.326852 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.327003 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6kgz\" (UniqueName: \"kubernetes.io/projected/40e354d8-a733-4531-b68c-d44b182050f3-kube-api-access-d6kgz\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.327147 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: 
\"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.327291 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40e354d8-a733-4531-b68c-d44b182050f3-audit-dir\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.327474 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-session\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.323097 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.327607 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-template-login\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.323743 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.324736 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.324819 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a52a543-c530-48d9-a046-ac4008df0477-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.325588 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.327769 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.329970 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.330669 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.342026 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a52a543-c530-48d9-a046-ac4008df0477-kube-api-access-qkplm" (OuterVolumeSpecName: "kube-api-access-qkplm") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "kube-api-access-qkplm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.342175 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.342645 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.343632 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.343865 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.344052 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.344380 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.428415 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-template-error\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.428496 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.428555 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.428592 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6kgz\" (UniqueName: \"kubernetes.io/projected/40e354d8-a733-4531-b68c-d44b182050f3-kube-api-access-d6kgz\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.428635 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.428668 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40e354d8-a733-4531-b68c-d44b182050f3-audit-dir\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.428708 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-session\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.428739 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-template-login\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 
30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.428914 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-service-ca\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.428964 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-audit-policies\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429005 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-router-certs\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429039 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429096 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429127 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429276 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkplm\" (UniqueName: \"kubernetes.io/projected/8a52a543-c530-48d9-a046-ac4008df0477-kube-api-access-qkplm\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429308 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429364 4751 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429389 4751 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429416 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429445 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429473 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429500 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429527 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429568 4751 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a52a543-c530-48d9-a046-ac4008df0477-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429570 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40e354d8-a733-4531-b68c-d44b182050f3-audit-dir\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429593 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429690 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429725 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.430651 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-service-ca\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.431510 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.433826 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.435008 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-template-error\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.435058 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.435678 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-audit-policies\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.436254 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-session\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.437118 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-template-login\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.437573 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.437646 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.441067 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-router-certs\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.441864 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.459255 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6kgz\" (UniqueName: \"kubernetes.io/projected/40e354d8-a733-4531-b68c-d44b182050f3-kube-api-access-d6kgz\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.610152 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.648831 4751 generic.go:334] "Generic (PLEG): container finished" podID="8a52a543-c530-48d9-a046-ac4008df0477" containerID="c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e" exitCode=0 Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.648892 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" event={"ID":"8a52a543-c530-48d9-a046-ac4008df0477","Type":"ContainerDied","Data":"c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e"} Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.648938 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" event={"ID":"8a52a543-c530-48d9-a046-ac4008df0477","Type":"ContainerDied","Data":"85f9f12a183ee9ac32edf469f266b83c69141757b64a96e9390b64f35e4d5e44"} Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.648958 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.648966 4751 scope.go:117] "RemoveContainer" containerID="c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.681866 4751 scope.go:117] "RemoveContainer" containerID="c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e" Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.682956 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e\": container with ID starting with c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e not found: ID does not exist" containerID="c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.683008 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e"} err="failed to get container status \"c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e\": rpc error: code = NotFound desc = could not find container \"c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e\": container with ID starting with c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e not found: ID does not exist" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.708740 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6dcxn"] Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.714642 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6dcxn"] Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.904388 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5f7896898-cgrzp"] Jan 30 21:18:29 crc kubenswrapper[4751]: I0130 21:18:29.657163 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" event={"ID":"40e354d8-a733-4531-b68c-d44b182050f3","Type":"ContainerStarted","Data":"fa1d9120b4afe6269fc9623b0dce2ed9a09009cdcee2400d35f01181f26e66d3"} Jan 30 21:18:29 crc kubenswrapper[4751]: I0130 21:18:29.657628 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:29 crc kubenswrapper[4751]: I0130 21:18:29.657645 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" event={"ID":"40e354d8-a733-4531-b68c-d44b182050f3","Type":"ContainerStarted","Data":"1415451051e5a56955dbb44bc54385482c06f8c731615dee5b327feb9a6fecf4"} Jan 30 21:18:29 crc kubenswrapper[4751]: I0130 21:18:29.688754 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" podStartSLOduration=27.688735288 podStartE2EDuration="27.688735288s" podCreationTimestamp="2026-01-30 21:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:18:29.686888633 +0000 UTC m=+248.432711282" watchObservedRunningTime="2026-01-30 21:18:29.688735288 +0000 UTC m=+248.434557947" Jan 30 21:18:29 crc kubenswrapper[4751]: I0130 
21:18:29.863661 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:29 crc kubenswrapper[4751]: I0130 21:18:29.981660 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a52a543-c530-48d9-a046-ac4008df0477" path="/var/lib/kubelet/pods/8a52a543-c530-48d9-a046-ac4008df0477/volumes" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.391429 4751 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.393017 4751 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.393255 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.393659 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1" gracePeriod=15 Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.393885 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b" gracePeriod=15 Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.394001 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1" gracePeriod=15 Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.394050 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d" gracePeriod=15 Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.394677 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9" gracePeriod=15 Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.395734 4751 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 21:18:30 crc kubenswrapper[4751]: E0130 21:18:30.396312 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.396385 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:18:30 crc kubenswrapper[4751]: E0130 21:18:30.396485 4751 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.396501 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 21:18:30 crc kubenswrapper[4751]: E0130 21:18:30.396554 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.396573 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 21:18:30 crc kubenswrapper[4751]: E0130 21:18:30.396601 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.396618 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 21:18:30 crc kubenswrapper[4751]: E0130 21:18:30.396686 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.396706 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 21:18:30 crc kubenswrapper[4751]: E0130 21:18:30.396733 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.396749 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.397079 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.397113 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.397131 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.397153 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.397178 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.397203 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 21:18:30 crc kubenswrapper[4751]: E0130 21:18:30.397567 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.397602 4751 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.455927 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.482452 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.482497 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.482518 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.482538 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.482600 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.482620 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.482639 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.482763 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:30 crc 
kubenswrapper[4751]: I0130 21:18:30.583363 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583411 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583430 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583459 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583483 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583507 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583521 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583525 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583579 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583587 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583580 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583618 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583630 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583633 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583539 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583657 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.602275 4751 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.602351 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.670957 4751 generic.go:334] "Generic (PLEG): container finished" podID="937199db-2864-42e7-bd7b-65315d94920f" containerID="e69a32abf266db71cf32cbc11401a25e95afb6e6d4db9827794b0fd5f381fb26" exitCode=0 
Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.671095 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"937199db-2864-42e7-bd7b-65315d94920f","Type":"ContainerDied","Data":"e69a32abf266db71cf32cbc11401a25e95afb6e6d4db9827794b0fd5f381fb26"} Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.672163 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.672660 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.672967 4751 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.674388 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.675983 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.676899 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d" exitCode=0 Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.676922 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b" exitCode=0 Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.676931 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9" exitCode=0 Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.676943 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1" exitCode=2 Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.677006 4751 scope.go:117] "RemoveContainer" containerID="0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.757053 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: W0130 21:18:30.782574 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-742a27aba0703b555c7fdffcc4a61f692930de201498f6f95a1e97db92dc34ad WatchSource:0}: Error finding container 742a27aba0703b555c7fdffcc4a61f692930de201498f6f95a1e97db92dc34ad: Status 404 returned error can't find the container with id 742a27aba0703b555c7fdffcc4a61f692930de201498f6f95a1e97db92dc34ad Jan 30 21:18:30 crc kubenswrapper[4751]: E0130 21:18:30.786458 4751 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.39:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f9eeb030f560c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:18:30.785734156 +0000 UTC m=+249.531556845,LastTimestamp:2026-01-30 21:18:30.785734156 +0000 UTC m=+249.531556845,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 21:18:31 crc kubenswrapper[4751]: I0130 21:18:31.688102 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"07daea61c26f8db733b7f11ca40decf928b78f0906e37dcb22d9a1f26b54e84b"} Jan 30 21:18:31 crc kubenswrapper[4751]: I0130 21:18:31.688654 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"742a27aba0703b555c7fdffcc4a61f692930de201498f6f95a1e97db92dc34ad"} Jan 30 21:18:31 crc kubenswrapper[4751]: I0130 21:18:31.689098 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:31 crc kubenswrapper[4751]: I0130 21:18:31.689537 4751 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:31 crc kubenswrapper[4751]: I0130 21:18:31.689874 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:31 crc kubenswrapper[4751]: I0130 21:18:31.693126 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:18:31 crc kubenswrapper[4751]: E0130 21:18:31.904196 4751 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:31 crc kubenswrapper[4751]: E0130 21:18:31.905990 4751 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:31 crc kubenswrapper[4751]: E0130 21:18:31.906690 4751 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:31 crc kubenswrapper[4751]: E0130 21:18:31.907403 4751 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:31 crc kubenswrapper[4751]: E0130 21:18:31.908058 4751 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:31 crc kubenswrapper[4751]: I0130 21:18:31.908107 4751 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 21:18:31 crc kubenswrapper[4751]: E0130 21:18:31.908672 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="200ms" Jan 30 21:18:31 crc kubenswrapper[4751]: I0130 21:18:31.980503 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:31 crc kubenswrapper[4751]: I0130 21:18:31.980962 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:31 crc kubenswrapper[4751]: I0130 21:18:31.981397 4751 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: 
connect: connection refused" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.034276 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.034983 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.035533 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:32 crc kubenswrapper[4751]: E0130 21:18:32.109828 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="400ms" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.117218 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-kubelet-dir\") pod \"937199db-2864-42e7-bd7b-65315d94920f\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.117393 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/937199db-2864-42e7-bd7b-65315d94920f-kube-api-access\") pod \"937199db-2864-42e7-bd7b-65315d94920f\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.117465 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-var-lock\") pod \"937199db-2864-42e7-bd7b-65315d94920f\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.117481 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "937199db-2864-42e7-bd7b-65315d94920f" (UID: "937199db-2864-42e7-bd7b-65315d94920f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.117587 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-var-lock" (OuterVolumeSpecName: "var-lock") pod "937199db-2864-42e7-bd7b-65315d94920f" (UID: "937199db-2864-42e7-bd7b-65315d94920f"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.117950 4751 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.117988 4751 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.125211 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/937199db-2864-42e7-bd7b-65315d94920f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "937199db-2864-42e7-bd7b-65315d94920f" (UID: "937199db-2864-42e7-bd7b-65315d94920f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.219049 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/937199db-2864-42e7-bd7b-65315d94920f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:32 crc kubenswrapper[4751]: E0130 21:18:32.512949 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="800ms" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.700772 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"937199db-2864-42e7-bd7b-65315d94920f","Type":"ContainerDied","Data":"f462afd414faa46099f0f64a6a8955052851c4a4930523036193d340eca901c4"} Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.701078 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f462afd414faa46099f0f64a6a8955052851c4a4930523036193d340eca901c4" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.700798 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.768891 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.769350 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.773623 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.774376 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.774803 4751 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.775308 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.775727 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.827678 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.827713 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.827735 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.827876 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.827898 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.827925 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.928491 4751 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.928806 4751 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.928896 4751 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:33 crc kubenswrapper[4751]: E0130 21:18:33.314382 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="1.6s" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.710285 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.711489 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1" exitCode=0 Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.711557 4751 scope.go:117] "RemoveContainer" containerID="a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.711556 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.728858 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.729253 4751 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.729847 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.733267 4751 scope.go:117] "RemoveContainer" containerID="3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.749482 4751 scope.go:117] "RemoveContainer" containerID="b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.764283 4751 scope.go:117] "RemoveContainer" containerID="f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.781533 4751 scope.go:117] "RemoveContainer" containerID="06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.809472 4751 scope.go:117] "RemoveContainer" containerID="6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.834937 4751 scope.go:117] "RemoveContainer" containerID="a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d" Jan 30 21:18:33 crc kubenswrapper[4751]: E0130 21:18:33.835668 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\": container with ID starting with a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d not found: ID does not exist" containerID="a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.835741 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d"} err="failed to get container status \"a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\": rpc error: code = NotFound desc = could not find container \"a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\": container with ID starting with a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d not found: ID does not exist" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.835772 4751 scope.go:117] "RemoveContainer" 
containerID="3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b" Jan 30 21:18:33 crc kubenswrapper[4751]: E0130 21:18:33.836342 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\": container with ID starting with 3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b not found: ID does not exist" containerID="3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.836373 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b"} err="failed to get container status \"3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\": rpc error: code = NotFound desc = could not find container \"3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\": container with ID starting with 3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b not found: ID does not exist" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.836418 4751 scope.go:117] "RemoveContainer" containerID="b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9" Jan 30 21:18:33 crc kubenswrapper[4751]: E0130 21:18:33.836798 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\": container with ID starting with b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9 not found: ID does not exist" containerID="b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.836831 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9"} err="failed to get container status \"b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\": rpc error: code = NotFound desc = could not find container \"b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\": container with ID starting with b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9 not found: ID does not exist" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.836852 4751 scope.go:117] "RemoveContainer" containerID="f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1" Jan 30 21:18:33 crc kubenswrapper[4751]: E0130 21:18:33.837339 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\": container with ID starting with f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1 not found: ID does not exist" containerID="f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.837368 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1"} err="failed to get container status \"f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\": rpc error: code = NotFound desc = could not find container \"f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\": container with ID starting with 
f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1 not found: ID does not exist" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.837385 4751 scope.go:117] "RemoveContainer" containerID="06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1" Jan 30 21:18:33 crc kubenswrapper[4751]: E0130 21:18:33.837769 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\": container with ID starting with 06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1 not found: ID does not exist" containerID="06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.837798 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1"} err="failed to get container status \"06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\": rpc error: code = NotFound desc = could not find container \"06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\": container with ID starting with 06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1 not found: ID does not exist" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.837818 4751 scope.go:117] "RemoveContainer" containerID="6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac" Jan 30 21:18:33 crc kubenswrapper[4751]: E0130 21:18:33.838394 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\": container with ID starting with 6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac not found: ID does not exist" containerID="6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.838444 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac"} err="failed to get container status \"6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\": rpc error: code = NotFound desc = could not find container \"6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\": container with ID starting with 6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac not found: ID does not exist" Jan 30 21:18:33 crc kubenswrapper[4751]: E0130 21:18:33.844351 4751 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.39:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f9eeb030f560c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:18:30.785734156 +0000 UTC 
m=+249.531556845,LastTimestamp:2026-01-30 21:18:30.785734156 +0000 UTC m=+249.531556845,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.984069 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 30 21:18:34 crc kubenswrapper[4751]: E0130 21:18:34.038630 4751 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.39:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" volumeName="registry-storage" Jan 30 21:18:34 crc kubenswrapper[4751]: E0130 21:18:34.915558 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="3.2s" Jan 30 21:18:38 crc kubenswrapper[4751]: E0130 21:18:38.117786 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="6.4s" Jan 30 21:18:41 crc kubenswrapper[4751]: I0130 21:18:41.980966 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:41 crc kubenswrapper[4751]: I0130 21:18:41.981970 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:43 crc kubenswrapper[4751]: E0130 21:18:43.465169 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:18:43Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:18:43Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:18:43Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:18:43Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:43 crc kubenswrapper[4751]: E0130 21:18:43.466222 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:43 crc kubenswrapper[4751]: E0130 21:18:43.466750 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:43 crc kubenswrapper[4751]: E0130 21:18:43.467151 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:43 crc kubenswrapper[4751]: E0130 21:18:43.467644 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:43 crc kubenswrapper[4751]: E0130 21:18:43.467748 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:18:43 crc kubenswrapper[4751]: E0130 21:18:43.849678 4751 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.39:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f9eeb030f560c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:18:30.785734156 +0000 UTC m=+249.531556845,LastTimestamp:2026-01-30 21:18:30.785734156 +0000 UTC 
m=+249.531556845,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 21:18:44 crc kubenswrapper[4751]: E0130 21:18:44.519429 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="7s" Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.804439 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.805428 4751 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817" exitCode=1 Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.805604 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817"} Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.807135 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.807429 4751 scope.go:117] "RemoveContainer" containerID="1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817" Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.807928 4751 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.808652 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.974965 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.976315 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.977027 4751 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.977929 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.003425 4751 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="89512b2a-f57e-4242-9c12-8b8c660dc530" Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.003469 4751 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="89512b2a-f57e-4242-9c12-8b8c660dc530" Jan 30 21:18:46 crc kubenswrapper[4751]: E0130 21:18:46.003999 4751 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.004696 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:46 crc kubenswrapper[4751]: W0130 21:18:46.026485 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-a4f22507a6831e8bdf44cb0120e0cb15ef5ae89d8edd74b9ca8137c01723f323 WatchSource:0}: Error finding container a4f22507a6831e8bdf44cb0120e0cb15ef5ae89d8edd74b9ca8137c01723f323: Status 404 returned error can't find the container with id a4f22507a6831e8bdf44cb0120e0cb15ef5ae89d8edd74b9ca8137c01723f323 Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.817273 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.817653 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1496ba1e480a418e497f3254ba5327a20e6b8be7abe0b396c67111c2b65c5bd9"} Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.818896 4751 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.819766 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.820316 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.821492 4751 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="d99321ea5199b32b855c4fcaa1a3b37457fe01201f0df65df1111a4d4a66d348" exitCode=0 Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.821559 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"d99321ea5199b32b855c4fcaa1a3b37457fe01201f0df65df1111a4d4a66d348"} Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.821658 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a4f22507a6831e8bdf44cb0120e0cb15ef5ae89d8edd74b9ca8137c01723f323"} Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.822170 4751 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="89512b2a-f57e-4242-9c12-8b8c660dc530" Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 
21:18:46.822213 4751 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="89512b2a-f57e-4242-9c12-8b8c660dc530" Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.822657 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:46 crc kubenswrapper[4751]: E0130 21:18:46.823003 4751 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.823276 4751 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.823881 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:47 crc kubenswrapper[4751]: I0130 21:18:47.856653 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c98f5864535909a1040c1af21dbb4bd51f1db489025cdf753cd7a8c6011df807"} Jan 30 21:18:48 crc kubenswrapper[4751]: I0130 21:18:48.869309 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"09c71adc517c1e66fa1618a0fb0e7caa43a685a2dcadd1e74b314230758f0db7"} Jan 30 21:18:48 crc kubenswrapper[4751]: I0130 21:18:48.870527 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5e7b269e0dd7975a09103c23ed8e54ece7b67064e81475a5b6f7916060d7a54c"} Jan 30 21:18:49 crc kubenswrapper[4751]: I0130 21:18:49.879393 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4a79523122416de0c1b2f9b74648052fc9efbf66b68ae6f5bd030f24e187ec95"} Jan 30 21:18:49 crc kubenswrapper[4751]: I0130 21:18:49.879714 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"080f907c4e15823a04b790dd2ccabbce01750bf18b95a319e6d5487004e32be6"} Jan 30 21:18:49 crc kubenswrapper[4751]: I0130 21:18:49.879762 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:49 crc kubenswrapper[4751]: I0130 21:18:49.879919 
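
Annotation: through this stretch every call to https://api-int.crc.testing:6443 (status updates, lease renewal, mirror-pod deletion, event posts) fails with "connection refused" while the new kube-apiserver containers come up, and the lease controller's retry interval doubles — interval="1.6s", then "3.2s", "6.4s", and a cap of "7s". Below is a standalone sketch that polls the apiserver's /readyz endpoint on the same doubling-and-cap schedule; the URL, intervals, and cap are taken from the log, while the probe itself (and skipping TLS verification against the cluster-local CA) is an illustrative assumption, not what the kubelet runs.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 10 * time.Second,
            // CRC's apiserver cert is signed by a cluster-local CA; verification
            // is skipped here purely for illustration.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        interval := 1600 * time.Millisecond // first retry interval in the lease entries
        const maxInterval = 7 * time.Second // the cap the lease entries settle at
        for {
            resp, err := client.Get("https://api-int.crc.testing:6443/readyz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver ready")
                    return
                }
                err = fmt.Errorf("readyz returned %s", resp.Status)
            }
            fmt.Printf("not ready (%v), retrying in %s\n", err, interval)
            time.Sleep(interval)
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }

In the log the apiserver answers again at 21:18:49-21:18:54, when the startup probe flips from "unhealthy" to "started" and readiness reports "ready".
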
4751 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="89512b2a-f57e-4242-9c12-8b8c660dc530" Jan 30 21:18:49 crc kubenswrapper[4751]: I0130 21:18:49.879955 4751 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="89512b2a-f57e-4242-9c12-8b8c660dc530" Jan 30 21:18:50 crc kubenswrapper[4751]: I0130 21:18:50.451042 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:18:50 crc kubenswrapper[4751]: I0130 21:18:50.800411 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:18:50 crc kubenswrapper[4751]: I0130 21:18:50.806932 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:18:51 crc kubenswrapper[4751]: I0130 21:18:51.005556 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:51 crc kubenswrapper[4751]: I0130 21:18:51.006032 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:51 crc kubenswrapper[4751]: I0130 21:18:51.015129 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:54 crc kubenswrapper[4751]: I0130 21:18:54.892607 4751 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:54 crc kubenswrapper[4751]: I0130 21:18:54.917405 4751 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="89512b2a-f57e-4242-9c12-8b8c660dc530" Jan 30 21:18:54 crc kubenswrapper[4751]: I0130 21:18:54.917432 4751 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="89512b2a-f57e-4242-9c12-8b8c660dc530" Jan 30 21:18:54 crc kubenswrapper[4751]: I0130 21:18:54.923251 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:54 crc kubenswrapper[4751]: I0130 21:18:54.980778 4751 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f6c84dee-176a-4c58-8eea-812049fd208b" Jan 30 21:18:55 crc kubenswrapper[4751]: I0130 21:18:55.923372 4751 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="89512b2a-f57e-4242-9c12-8b8c660dc530" Jan 30 21:18:55 crc kubenswrapper[4751]: I0130 21:18:55.923419 4751 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="89512b2a-f57e-4242-9c12-8b8c660dc530" Jan 30 21:18:55 crc kubenswrapper[4751]: I0130 21:18:55.926938 4751 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f6c84dee-176a-4c58-8eea-812049fd208b" Jan 30 21:19:00 crc kubenswrapper[4751]: I0130 21:19:00.460521 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:19:03 crc kubenswrapper[4751]: I0130 
21:19:03.696767 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 21:19:03 crc kubenswrapper[4751]: I0130 21:19:03.849784 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 21:19:04 crc kubenswrapper[4751]: I0130 21:19:04.685566 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 21:19:04 crc kubenswrapper[4751]: I0130 21:19:04.726481 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 21:19:04 crc kubenswrapper[4751]: I0130 21:19:04.734794 4751 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 21:19:04 crc kubenswrapper[4751]: I0130 21:19:04.879998 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 21:19:04 crc kubenswrapper[4751]: I0130 21:19:04.910252 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 21:19:05 crc kubenswrapper[4751]: I0130 21:19:05.167735 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 21:19:05 crc kubenswrapper[4751]: I0130 21:19:05.195542 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 21:19:05 crc kubenswrapper[4751]: I0130 21:19:05.220180 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 21:19:05 crc kubenswrapper[4751]: I0130 21:19:05.559413 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 21:19:05 crc kubenswrapper[4751]: I0130 21:19:05.710723 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 21:19:05 crc kubenswrapper[4751]: I0130 21:19:05.838983 4751 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 21:19:05 crc kubenswrapper[4751]: I0130 21:19:05.843256 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 21:19:05 crc kubenswrapper[4751]: I0130 21:19:05.850701 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 21:19:05 crc kubenswrapper[4751]: I0130 21:19:05.896194 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 21:19:05 crc kubenswrapper[4751]: I0130 21:19:05.906727 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 21:19:06 crc kubenswrapper[4751]: I0130 21:19:06.354428 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 21:19:06 crc kubenswrapper[4751]: I0130 21:19:06.648682 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 21:19:06 crc kubenswrapper[4751]: I0130 21:19:06.788087 4751 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 21:19:06 crc kubenswrapper[4751]: I0130 21:19:06.903842 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 21:19:06 crc kubenswrapper[4751]: I0130 21:19:06.933419 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 21:19:06 crc kubenswrapper[4751]: I0130 21:19:06.952603 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.094751 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.152216 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.166839 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.234891 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.255540 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.329231 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.411853 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.516817 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.547195 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.751469 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.797479 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.843008 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.902737 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.931577 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.936545 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.951077 4751 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.962582 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.001205 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.038628 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.077309 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.190650 4751 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.208185 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.258686 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.316060 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.400191 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.425716 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.482928 4751 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.486112 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=38.486096768 podStartE2EDuration="38.486096768s" podCreationTimestamp="2026-01-30 21:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:18:54.948466312 +0000 UTC m=+273.694289011" watchObservedRunningTime="2026-01-30 21:19:08.486096768 +0000 UTC m=+287.231919427" Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.487789 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.487840 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.494234 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.516128 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.516094708 podStartE2EDuration="14.516094708s" podCreationTimestamp="2026-01-30 21:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
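
Annotation: the pod_startup_latency_tracker entry above reports podStartSLOduration=38.486096768 for kube-apiserver-startup-monitor-crc, which is simply watchObservedRunningTime minus podCreationTimestamp (21:19:08.486096768 − 21:18:30.000000000); the kube-apiserver-crc entry that follows works out the same way (21:19:08.516094708 − 21:18:54 = 14.516094708s). A short check of that arithmetic, parsing the timestamps exactly as they are printed in the entry:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching the timestamps as printed in the tracker entry.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2026-01-30 21:18:30 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2026-01-30 21:19:08.486096768 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(observed.Sub(created)) // 38.486096768s, the logged podStartSLOduration
    }
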
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:19:08.506462086 +0000 UTC m=+287.252284745" watchObservedRunningTime="2026-01-30 21:19:08.516094708 +0000 UTC m=+287.261917397" Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.646876 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.690802 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.725782 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.737260 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.763032 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.824077 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.906410 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.995390 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.004314 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.030786 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.116942 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.212058 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.234440 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.255364 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.268044 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.277779 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.283050 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.391944 4751 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.614739 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.662027 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.734904 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.806664 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.835187 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.840594 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.874933 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.882885 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.985605 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.000618 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.029219 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.069611 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.147019 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.241737 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.331291 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.506485 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.618164 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.626154 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.933774 
4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.946934 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.069507 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.111187 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.144483 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.235995 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.311110 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.481663 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.505437 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.521746 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.529179 4751 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.551401 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.570405 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.588635 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.597360 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.694817 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.696196 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.770385 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.798504 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.149411 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.326800 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.430086 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.466563 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.524309 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.555505 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.589735 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.660412 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.675871 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.679142 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.733671 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.759588 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.837389 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.842492 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.858744 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.894180 4751 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.921112 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.952814 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.001381 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.005270 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.047860 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.124003 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.263313 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.276505 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.319549 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.451034 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.551406 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.592237 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.595562 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.610222 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.658623 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.669123 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.744835 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.888801 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.921909 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.946115 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.023201 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.072126 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.157282 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.195861 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.201916 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.215546 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.272432 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.282228 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.407680 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.416140 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.661730 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.671148 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.687226 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.690133 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.767843 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.804474 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.848195 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.894478 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.009974 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.085684 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.103962 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.133045 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.263765 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.283118 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.301760 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.315683 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.318598 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.350762 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.357740 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.371440 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.377957 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.530299 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.609574 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.626482 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.673049 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.700408 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.735270 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.759160 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.853921 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.005027 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.010283 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.113778 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.114468 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.148997 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.195935 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.226387 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.233932 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.273435 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.315889 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.370202 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.379217 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.403812 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.420498 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.515801 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.527646 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.536588 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.699624 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.847976 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.889882 4751 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.890643 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://07daea61c26f8db733b7f11ca40decf928b78f0906e37dcb22d9a1f26b54e84b" gracePeriod=5
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.900818 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.954490 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 30 21:19:17 crc kubenswrapper[4751]: I0130 21:19:17.061725 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 30 21:19:17 crc kubenswrapper[4751]: I0130 21:19:17.183097 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 30 21:19:17 crc kubenswrapper[4751]: I0130 21:19:17.356894 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 30 21:19:17 crc kubenswrapper[4751]: I0130 21:19:17.800422 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 30 21:19:17 crc kubenswrapper[4751]: I0130 21:19:17.817543 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 30 21:19:17 crc kubenswrapper[4751]: I0130 21:19:17.872505 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 30 21:19:17 crc kubenswrapper[4751]: I0130 21:19:17.906606 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 30 21:19:18 crc kubenswrapper[4751]: I0130 21:19:18.116892 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 30 21:19:18 crc kubenswrapper[4751]: I0130 21:19:18.180499 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 30 21:19:18 crc kubenswrapper[4751]: I0130 21:19:18.693463 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 30 21:19:18 crc kubenswrapper[4751]: I0130 21:19:18.748806 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 21:19:18 crc kubenswrapper[4751]: I0130 21:19:18.826450 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 30 21:19:18 crc kubenswrapper[4751]: I0130 21:19:18.858169 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 30 21:19:18 crc kubenswrapper[4751]: I0130 21:19:18.871518 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 30 21:19:18 crc kubenswrapper[4751]: I0130 21:19:18.883536 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 30 21:19:19 crc kubenswrapper[4751]: I0130 21:19:19.036973 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 30 21:19:19 crc kubenswrapper[4751]: I0130 21:19:19.229588 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 30 21:19:19 crc kubenswrapper[4751]: I0130 21:19:19.503455 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 30 21:19:19 crc kubenswrapper[4751]: I0130 21:19:19.512456 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 30 21:19:19 crc kubenswrapper[4751]: I0130 21:19:19.602135 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 30 21:19:19 crc kubenswrapper[4751]: I0130 21:19:19.688268 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 30 21:19:19 crc kubenswrapper[4751]: I0130 21:19:19.754749 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 30 21:19:20 crc kubenswrapper[4751]: I0130 21:20:20.030467 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 30 21:19:20 crc kubenswrapper[4751]: I0130 21:19:20.198518 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 30 21:19:20 crc kubenswrapper[4751]: I0130 21:19:20.446963 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 30 21:19:21 crc kubenswrapper[4751]: I0130 21:19:21.754581 4751 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.091751 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.091807 4751 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="07daea61c26f8db733b7f11ca40decf928b78f0906e37dcb22d9a1f26b54e84b" exitCode=137
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.504863 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.505475 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.560260 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.560659 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.560820 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.561162 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.560968 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.561418 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.561519 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.561864 4751 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.561898 4751 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.561946 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.562253 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.572916 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.662590 4751 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.662990 4751 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.663009 4751 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 30 21:19:23 crc kubenswrapper[4751]: I0130 21:19:23.102885 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 30 21:19:23 crc kubenswrapper[4751]: I0130 21:19:23.102994 4751 scope.go:117] "RemoveContainer" containerID="07daea61c26f8db733b7f11ca40decf928b78f0906e37dcb22d9a1f26b54e84b"
Jan 30 21:19:23 crc kubenswrapper[4751]: I0130 21:19:23.103076 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 21:19:23 crc kubenswrapper[4751]: I0130 21:19:23.986561 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 30 21:19:23 crc kubenswrapper[4751]: I0130 21:19:23.987017 4751 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 30 21:19:23 crc kubenswrapper[4751]: I0130 21:19:23.998449 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 30 21:19:23 crc kubenswrapper[4751]: I0130 21:19:23.998497 4751 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f3ee0bd4-4fdb-4ee0-858f-860a616d1460"
Jan 30 21:19:24 crc kubenswrapper[4751]: I0130 21:19:24.002392 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 30 21:19:24 crc kubenswrapper[4751]: I0130 21:19:24.002420 4751 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f3ee0bd4-4fdb-4ee0-858f-860a616d1460"
Jan 30 21:19:29 crc kubenswrapper[4751]: I0130 21:19:29.391993 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 30 21:19:30 crc kubenswrapper[4751]: I0130 21:19:30.894572 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 30 21:19:32 crc kubenswrapper[4751]: I0130 21:19:32.143186 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 30 21:19:33 crc kubenswrapper[4751]: I0130 21:19:33.166411 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 30 21:19:33 crc kubenswrapper[4751]: I0130 21:19:33.955666 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 30 21:19:37 crc kubenswrapper[4751]: I0130 21:19:37.861436 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 30 21:19:37 crc kubenswrapper[4751]: I0130 21:19:37.886672 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 30 21:19:37 crc kubenswrapper[4751]: I0130 21:19:37.928507 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 30 21:19:40 crc kubenswrapper[4751]: I0130 21:19:40.635423 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 30 21:19:42 crc kubenswrapper[4751]: I0130 21:19:42.286976 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 30 21:19:42 crc kubenswrapper[4751]: I0130 21:19:42.539959 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 30 21:19:44 crc kubenswrapper[4751]: I0130 21:19:44.087473 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 30 21:19:44 crc kubenswrapper[4751]: I0130 21:19:44.978081 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 30 21:19:45 crc kubenswrapper[4751]: I0130 21:19:45.416973 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 30 21:19:45 crc kubenswrapper[4751]: I0130 21:19:45.953195 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 30 21:19:47 crc kubenswrapper[4751]: I0130 21:19:47.554574 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 30 21:19:48 crc kubenswrapper[4751]: I0130 21:19:48.690462 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 30 21:19:49 crc kubenswrapper[4751]: I0130 21:19:49.776553 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 30 21:19:49 crc kubenswrapper[4751]: I0130 21:19:49.952077 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 30 21:19:50 crc kubenswrapper[4751]: I0130 21:19:50.003772 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 30 21:19:50 crc kubenswrapper[4751]: I0130 21:19:50.551775 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 30 21:19:51 crc kubenswrapper[4751]: I0130 21:19:51.626511 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 30 21:19:54 crc kubenswrapper[4751]: I0130 21:19:54.066201 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 30 21:19:54 crc kubenswrapper[4751]: I0130 21:19:54.126922 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 21:19:54 crc kubenswrapper[4751]: I0130 21:19:54.127001 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 21:19:55 crc kubenswrapper[4751]: I0130 21:19:55.337514 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 30 21:19:55 crc kubenswrapper[4751]: I0130 21:19:55.766798 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 30 21:20:02 crc kubenswrapper[4751]: I0130 21:20:02.646079 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 30 21:20:08 crc kubenswrapper[4751]: I0130 21:20:08.609165 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8jsqt"]
Jan 30 21:20:08 crc kubenswrapper[4751]: I0130 21:20:08.610020 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" podUID="61e09136-e0d4-4c75-ad01-543778867411" containerName="controller-manager" containerID="cri-o://4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3" gracePeriod=30
Jan 30 21:20:08 crc kubenswrapper[4751]: I0130 21:20:08.618552 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"]
Jan 30 21:20:08 crc kubenswrapper[4751]: I0130 21:20:08.618988 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" podUID="322809f5-4f4c-487e-8488-6c62bac86f8f" containerName="route-controller-manager" containerID="cri-o://f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b" gracePeriod=30
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.006081 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.011673 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.045533 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtnfh\" (UniqueName: \"kubernetes.io/projected/61e09136-e0d4-4c75-ad01-543778867411-kube-api-access-wtnfh\") pod \"61e09136-e0d4-4c75-ad01-543778867411\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") "
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.045606 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61e09136-e0d4-4c75-ad01-543778867411-serving-cert\") pod \"61e09136-e0d4-4c75-ad01-543778867411\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") "
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.045650 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/322809f5-4f4c-487e-8488-6c62bac86f8f-serving-cert\") pod \"322809f5-4f4c-487e-8488-6c62bac86f8f\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") "
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.045681 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-client-ca\") pod \"61e09136-e0d4-4c75-ad01-543778867411\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") "
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.045704 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-config\") pod \"61e09136-e0d4-4c75-ad01-543778867411\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") "
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.045752 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-config\") pod \"322809f5-4f4c-487e-8488-6c62bac86f8f\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") "
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.045775 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-proxy-ca-bundles\") pod \"61e09136-e0d4-4c75-ad01-543778867411\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") "
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.045797 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-client-ca\") pod \"322809f5-4f4c-487e-8488-6c62bac86f8f\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") "
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.045835 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx729\" (UniqueName: \"kubernetes.io/projected/322809f5-4f4c-487e-8488-6c62bac86f8f-kube-api-access-kx729\") pod \"322809f5-4f4c-487e-8488-6c62bac86f8f\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") "
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.047351 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-client-ca" (OuterVolumeSpecName: "client-ca") pod "61e09136-e0d4-4c75-ad01-543778867411" (UID: "61e09136-e0d4-4c75-ad01-543778867411"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.050412 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-config" (OuterVolumeSpecName: "config") pod "322809f5-4f4c-487e-8488-6c62bac86f8f" (UID: "322809f5-4f4c-487e-8488-6c62bac86f8f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.053756 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-config" (OuterVolumeSpecName: "config") pod "61e09136-e0d4-4c75-ad01-543778867411" (UID: "61e09136-e0d4-4c75-ad01-543778867411"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.053947 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-client-ca" (OuterVolumeSpecName: "client-ca") pod "322809f5-4f4c-487e-8488-6c62bac86f8f" (UID: "322809f5-4f4c-487e-8488-6c62bac86f8f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.054663 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "61e09136-e0d4-4c75-ad01-543778867411" (UID: "61e09136-e0d4-4c75-ad01-543778867411"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.054725 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/322809f5-4f4c-487e-8488-6c62bac86f8f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "322809f5-4f4c-487e-8488-6c62bac86f8f" (UID: "322809f5-4f4c-487e-8488-6c62bac86f8f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.059273 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/322809f5-4f4c-487e-8488-6c62bac86f8f-kube-api-access-kx729" (OuterVolumeSpecName: "kube-api-access-kx729") pod "322809f5-4f4c-487e-8488-6c62bac86f8f" (UID: "322809f5-4f4c-487e-8488-6c62bac86f8f"). InnerVolumeSpecName "kube-api-access-kx729". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.061068 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61e09136-e0d4-4c75-ad01-543778867411-kube-api-access-wtnfh" (OuterVolumeSpecName: "kube-api-access-wtnfh") pod "61e09136-e0d4-4c75-ad01-543778867411" (UID: "61e09136-e0d4-4c75-ad01-543778867411"). InnerVolumeSpecName "kube-api-access-wtnfh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.061289 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61e09136-e0d4-4c75-ad01-543778867411-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "61e09136-e0d4-4c75-ad01-543778867411" (UID: "61e09136-e0d4-4c75-ad01-543778867411"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.147357 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtnfh\" (UniqueName: \"kubernetes.io/projected/61e09136-e0d4-4c75-ad01-543778867411-kube-api-access-wtnfh\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.147421 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61e09136-e0d4-4c75-ad01-543778867411-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.147442 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/322809f5-4f4c-487e-8488-6c62bac86f8f-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.147463 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.147513 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-config\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.147531 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-config\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.147549 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.147566 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.147583 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx729\" (UniqueName: \"kubernetes.io/projected/322809f5-4f4c-487e-8488-6c62bac86f8f-kube-api-access-kx729\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.415658 4751 generic.go:334] "Generic (PLEG): container finished" podID="322809f5-4f4c-487e-8488-6c62bac86f8f" containerID="f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b" exitCode=0
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.415740 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.415812 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" event={"ID":"322809f5-4f4c-487e-8488-6c62bac86f8f","Type":"ContainerDied","Data":"f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b"}
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.415864 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" event={"ID":"322809f5-4f4c-487e-8488-6c62bac86f8f","Type":"ContainerDied","Data":"f7e0d553caf37cf1c65a97cae3829801333e8d0eb24ba3398a66bb00e08506f3"}
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.415902 4751 scope.go:117] "RemoveContainer" containerID="f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.419193 4751 generic.go:334] "Generic (PLEG): container finished" podID="61e09136-e0d4-4c75-ad01-543778867411" containerID="4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3" exitCode=0
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.419239 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" event={"ID":"61e09136-e0d4-4c75-ad01-543778867411","Type":"ContainerDied","Data":"4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3"}
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.419291 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.419302 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" event={"ID":"61e09136-e0d4-4c75-ad01-543778867411","Type":"ContainerDied","Data":"bcc3e35fa7bf77d352470a19ce3b00e0ae26473ecc7d562f4aa3b014710b8b83"}
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.446641 4751 scope.go:117] "RemoveContainer" containerID="f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b"
Jan 30 21:20:09 crc kubenswrapper[4751]: E0130 21:20:09.447381 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b\": container with ID starting with f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b not found: ID does not exist" containerID="f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.447725 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b"} err="failed to get container status \"f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b\": rpc error: code = NotFound desc = could not find container \"f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b\": container with ID starting with f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b not found: ID does not exist"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.447771 4751 scope.go:117] "RemoveContainer" containerID="4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.468596 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"]
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.472367 4751 scope.go:117] "RemoveContainer" containerID="4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3"
Jan 30 21:20:09 crc kubenswrapper[4751]: E0130 21:20:09.474482 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3\": container with ID starting with 4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3 not found: ID does not exist" containerID="4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.474547 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3"} err="failed to get container status \"4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3\": rpc error: code = NotFound desc = could not find container \"4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3\": container with ID starting with 4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3 not found: ID does not exist"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.478933 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"]
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.487365 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8jsqt"]
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.495549 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8jsqt"]
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.825584 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"]
Jan 30 21:20:09 crc kubenswrapper[4751]: E0130 21:20:09.826044 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="937199db-2864-42e7-bd7b-65315d94920f" containerName="installer"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.826065 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="937199db-2864-42e7-bd7b-65315d94920f" containerName="installer"
Jan 30 21:20:09 crc kubenswrapper[4751]: E0130 21:20:09.826124 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e09136-e0d4-4c75-ad01-543778867411" containerName="controller-manager"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.826138 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="61e09136-e0d4-4c75-ad01-543778867411" containerName="controller-manager"
Jan 30 21:20:09 crc kubenswrapper[4751]: E0130 21:20:09.826164 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.826216 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 30 21:20:09 crc kubenswrapper[4751]: E0130 21:20:09.826243 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="322809f5-4f4c-487e-8488-6c62bac86f8f" containerName="route-controller-manager"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.826256 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="322809f5-4f4c-487e-8488-6c62bac86f8f" containerName="route-controller-manager"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.826610 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="61e09136-e0d4-4c75-ad01-543778867411" containerName="controller-manager"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.826676 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.826693 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="937199db-2864-42e7-bd7b-65315d94920f" containerName="installer"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.826709 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="322809f5-4f4c-487e-8488-6c62bac86f8f" containerName="route-controller-manager"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.827482 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.829885 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.831844 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.832873 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.833776 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.834178 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.834428 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.837282 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"]
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.844051 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.858313 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-proxy-ca-bundles\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.858925 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-config\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.859624 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56ca1bea-be0c-4187-85e8-33290c1ac419-serving-cert\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.860115 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r6dj\" (UniqueName: \"kubernetes.io/projected/56ca1bea-be0c-4187-85e8-33290c1ac419-kube-api-access-5r6dj\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.860637 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-client-ca\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.961992 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-proxy-ca-bundles\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.962083 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-config\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.962129 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56ca1bea-be0c-4187-85e8-33290c1ac419-serving-cert\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.962182 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r6dj\" (UniqueName: \"kubernetes.io/projected/56ca1bea-be0c-4187-85e8-33290c1ac419-kube-api-access-5r6dj\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.962241 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-client-ca\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.964617 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-config\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.964976 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-client-ca\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.965320 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-proxy-ca-bundles\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.971786 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56ca1bea-be0c-4187-85e8-33290c1ac419-serving-cert\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.988011 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="322809f5-4f4c-487e-8488-6c62bac86f8f" path="/var/lib/kubelet/pods/322809f5-4f4c-487e-8488-6c62bac86f8f/volumes"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.989203 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61e09136-e0d4-4c75-ad01-543778867411" path="/var/lib/kubelet/pods/61e09136-e0d4-4c75-ad01-543778867411/volumes"
Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.993088 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r6dj\" (UniqueName: \"kubernetes.io/projected/56ca1bea-be0c-4187-85e8-33290c1ac419-kube-api-access-5r6dj\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"
Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.162109 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"
Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.435221 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"]
Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.822254 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6"]
Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.823254 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6"
Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.830741 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.830973 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.831091 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.831119 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.831279 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.840209 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.851372 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6"]
Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.876123 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6651fa2d-f596-4675-a425-f8baff64a3d6-config\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6"
Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.876189 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6651fa2d-f596-4675-a425-f8baff64a3d6-serving-cert\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6"
Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.876218 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6651fa2d-f596-4675-a425-f8baff64a3d6-client-ca\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6"
Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.876356 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h55rq\" (UniqueName: \"kubernetes.io/projected/6651fa2d-f596-4675-a425-f8baff64a3d6-kube-api-access-h55rq\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6"
Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.976887 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h55rq\" (UniqueName: \"kubernetes.io/projected/6651fa2d-f596-4675-a425-f8baff64a3d6-kube-api-access-h55rq\") pod
\"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.976938 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6651fa2d-f596-4675-a425-f8baff64a3d6-config\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.976979 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6651fa2d-f596-4675-a425-f8baff64a3d6-serving-cert\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.976999 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6651fa2d-f596-4675-a425-f8baff64a3d6-client-ca\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.977963 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6651fa2d-f596-4675-a425-f8baff64a3d6-client-ca\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.978193 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6651fa2d-f596-4675-a425-f8baff64a3d6-config\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.989705 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6651fa2d-f596-4675-a425-f8baff64a3d6-serving-cert\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.994886 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h55rq\" (UniqueName: \"kubernetes.io/projected/6651fa2d-f596-4675-a425-f8baff64a3d6-kube-api-access-h55rq\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:11 crc kubenswrapper[4751]: I0130 21:20:11.140586 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:11 crc kubenswrapper[4751]: I0130 21:20:11.472210 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" event={"ID":"56ca1bea-be0c-4187-85e8-33290c1ac419","Type":"ContainerStarted","Data":"727bd2b956937e646c881e73ea4cae6c00e8e56224d81dcf10b0bf0d1d5db9fe"} Jan 30 21:20:11 crc kubenswrapper[4751]: I0130 21:20:11.472586 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:11 crc kubenswrapper[4751]: I0130 21:20:11.472601 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" event={"ID":"56ca1bea-be0c-4187-85e8-33290c1ac419","Type":"ContainerStarted","Data":"c085dd6df8388c91766eaaa1380463bb804c12c891fe6f412357624bc97f5f67"} Jan 30 21:20:11 crc kubenswrapper[4751]: I0130 21:20:11.479459 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:11 crc kubenswrapper[4751]: I0130 21:20:11.521511 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" podStartSLOduration=3.521473476 podStartE2EDuration="3.521473476s" podCreationTimestamp="2026-01-30 21:20:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:20:11.501084629 +0000 UTC m=+350.246907308" watchObservedRunningTime="2026-01-30 21:20:11.521473476 +0000 UTC m=+350.267296155" Jan 30 21:20:11 crc kubenswrapper[4751]: I0130 21:20:11.567223 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6"] Jan 30 21:20:11 crc kubenswrapper[4751]: W0130 21:20:11.569511 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6651fa2d_f596_4675_a425_f8baff64a3d6.slice/crio-619922289cfcd974a4d7cfe3325a39ac3baf4d25fe4bbf3bef013feb10514180 WatchSource:0}: Error finding container 619922289cfcd974a4d7cfe3325a39ac3baf4d25fe4bbf3bef013feb10514180: Status 404 returned error can't find the container with id 619922289cfcd974a4d7cfe3325a39ac3baf4d25fe4bbf3bef013feb10514180 Jan 30 21:20:12 crc kubenswrapper[4751]: I0130 21:20:12.480471 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" event={"ID":"6651fa2d-f596-4675-a425-f8baff64a3d6","Type":"ContainerStarted","Data":"319c9c4919ac764c36070d567a3875051ca852e988cc204a844091e978a934ba"} Jan 30 21:20:12 crc kubenswrapper[4751]: I0130 21:20:12.482070 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" event={"ID":"6651fa2d-f596-4675-a425-f8baff64a3d6","Type":"ContainerStarted","Data":"619922289cfcd974a4d7cfe3325a39ac3baf4d25fe4bbf3bef013feb10514180"} Jan 30 21:20:12 crc kubenswrapper[4751]: I0130 21:20:12.501969 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" podStartSLOduration=4.501951473 podStartE2EDuration="4.501951473s" 
podCreationTimestamp="2026-01-30 21:20:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:20:12.501410858 +0000 UTC m=+351.247233507" watchObservedRunningTime="2026-01-30 21:20:12.501951473 +0000 UTC m=+351.247774132" Jan 30 21:20:13 crc kubenswrapper[4751]: I0130 21:20:13.487348 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:13 crc kubenswrapper[4751]: I0130 21:20:13.496111 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:24 crc kubenswrapper[4751]: I0130 21:20:24.126903 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:20:24 crc kubenswrapper[4751]: I0130 21:20:24.127492 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.443974 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-psfpp"] Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.445557 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.468257 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-psfpp"] Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.620599 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1963e246-4713-4682-8915-12bbc2f33d95-registry-tls\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.620651 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1963e246-4713-4682-8915-12bbc2f33d95-registry-certificates\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.620689 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1963e246-4713-4682-8915-12bbc2f33d95-installation-pull-secrets\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.620718 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1963e246-4713-4682-8915-12bbc2f33d95-ca-trust-extracted\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.620819 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wfb7\" (UniqueName: \"kubernetes.io/projected/1963e246-4713-4682-8915-12bbc2f33d95-kube-api-access-9wfb7\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.620921 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.621040 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1963e246-4713-4682-8915-12bbc2f33d95-trusted-ca\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.621120 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/1963e246-4713-4682-8915-12bbc2f33d95-bound-sa-token\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.671658 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.722780 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1963e246-4713-4682-8915-12bbc2f33d95-bound-sa-token\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.722875 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1963e246-4713-4682-8915-12bbc2f33d95-registry-tls\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.722937 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1963e246-4713-4682-8915-12bbc2f33d95-registry-certificates\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.723017 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1963e246-4713-4682-8915-12bbc2f33d95-installation-pull-secrets\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.723068 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1963e246-4713-4682-8915-12bbc2f33d95-ca-trust-extracted\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.723112 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wfb7\" (UniqueName: \"kubernetes.io/projected/1963e246-4713-4682-8915-12bbc2f33d95-kube-api-access-9wfb7\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.723169 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1963e246-4713-4682-8915-12bbc2f33d95-trusted-ca\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.725120 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1963e246-4713-4682-8915-12bbc2f33d95-trusted-ca\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.729097 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1963e246-4713-4682-8915-12bbc2f33d95-registry-certificates\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.729175 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1963e246-4713-4682-8915-12bbc2f33d95-ca-trust-extracted\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.732940 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1963e246-4713-4682-8915-12bbc2f33d95-installation-pull-secrets\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.733442 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1963e246-4713-4682-8915-12bbc2f33d95-registry-tls\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.745037 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1963e246-4713-4682-8915-12bbc2f33d95-bound-sa-token\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.747182 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wfb7\" (UniqueName: \"kubernetes.io/projected/1963e246-4713-4682-8915-12bbc2f33d95-kube-api-access-9wfb7\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.766177 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:35 crc kubenswrapper[4751]: I0130 21:20:35.226492 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-psfpp"] Jan 30 21:20:35 crc kubenswrapper[4751]: I0130 21:20:35.665703 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" event={"ID":"1963e246-4713-4682-8915-12bbc2f33d95","Type":"ContainerStarted","Data":"f51852353928f67bedd9a60de000891ba690364ad64c236bb06988638548d5db"} Jan 30 21:20:35 crc kubenswrapper[4751]: I0130 21:20:35.665768 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" event={"ID":"1963e246-4713-4682-8915-12bbc2f33d95","Type":"ContainerStarted","Data":"a0f0d247ce14cb38f91bcc803c2567541a4495a9fc6dc4d5789752f66ecd05cb"} Jan 30 21:20:35 crc kubenswrapper[4751]: I0130 21:20:35.665919 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:35 crc kubenswrapper[4751]: I0130 21:20:35.692065 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" podStartSLOduration=1.692037077 podStartE2EDuration="1.692037077s" podCreationTimestamp="2026-01-30 21:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:20:35.691835571 +0000 UTC m=+374.437658250" watchObservedRunningTime="2026-01-30 21:20:35.692037077 +0000 UTC m=+374.437859776" Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.552757 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"] Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.553665 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" podUID="56ca1bea-be0c-4187-85e8-33290c1ac419" containerName="controller-manager" containerID="cri-o://727bd2b956937e646c881e73ea4cae6c00e8e56224d81dcf10b0bf0d1d5db9fe" gracePeriod=30 Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.750742 4751 generic.go:334] "Generic (PLEG): container finished" podID="56ca1bea-be0c-4187-85e8-33290c1ac419" containerID="727bd2b956937e646c881e73ea4cae6c00e8e56224d81dcf10b0bf0d1d5db9fe" exitCode=0 Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.750794 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" event={"ID":"56ca1bea-be0c-4187-85e8-33290c1ac419","Type":"ContainerDied","Data":"727bd2b956937e646c881e73ea4cae6c00e8e56224d81dcf10b0bf0d1d5db9fe"} Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.948900 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.954671 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56ca1bea-be0c-4187-85e8-33290c1ac419-serving-cert\") pod \"56ca1bea-be0c-4187-85e8-33290c1ac419\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.954821 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r6dj\" (UniqueName: \"kubernetes.io/projected/56ca1bea-be0c-4187-85e8-33290c1ac419-kube-api-access-5r6dj\") pod \"56ca1bea-be0c-4187-85e8-33290c1ac419\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.954930 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-config\") pod \"56ca1bea-be0c-4187-85e8-33290c1ac419\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.955812 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-config" (OuterVolumeSpecName: "config") pod "56ca1bea-be0c-4187-85e8-33290c1ac419" (UID: "56ca1bea-be0c-4187-85e8-33290c1ac419"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.955920 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-client-ca\") pod \"56ca1bea-be0c-4187-85e8-33290c1ac419\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.956384 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-client-ca" (OuterVolumeSpecName: "client-ca") pod "56ca1bea-be0c-4187-85e8-33290c1ac419" (UID: "56ca1bea-be0c-4187-85e8-33290c1ac419"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.956477 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-proxy-ca-bundles\") pod \"56ca1bea-be0c-4187-85e8-33290c1ac419\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.957539 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "56ca1bea-be0c-4187-85e8-33290c1ac419" (UID: "56ca1bea-be0c-4187-85e8-33290c1ac419"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.957903 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.957933 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.957950 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.960510 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56ca1bea-be0c-4187-85e8-33290c1ac419-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "56ca1bea-be0c-4187-85e8-33290c1ac419" (UID: "56ca1bea-be0c-4187-85e8-33290c1ac419"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.960764 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56ca1bea-be0c-4187-85e8-33290c1ac419-kube-api-access-5r6dj" (OuterVolumeSpecName: "kube-api-access-5r6dj") pod "56ca1bea-be0c-4187-85e8-33290c1ac419" (UID: "56ca1bea-be0c-4187-85e8-33290c1ac419"). InnerVolumeSpecName "kube-api-access-5r6dj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.058688 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56ca1bea-be0c-4187-85e8-33290c1ac419-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.058726 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r6dj\" (UniqueName: \"kubernetes.io/projected/56ca1bea-be0c-4187-85e8-33290c1ac419-kube-api-access-5r6dj\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.759285 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" event={"ID":"56ca1bea-be0c-4187-85e8-33290c1ac419","Type":"ContainerDied","Data":"c085dd6df8388c91766eaaa1380463bb804c12c891fe6f412357624bc97f5f67"} Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.759755 4751 scope.go:117] "RemoveContainer" containerID="727bd2b956937e646c881e73ea4cae6c00e8e56224d81dcf10b0bf0d1d5db9fe" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.759412 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.814570 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"] Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.824842 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"] Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.858785 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-747dd9759b-6wjt2"] Jan 30 21:20:49 crc kubenswrapper[4751]: E0130 21:20:49.860766 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ca1bea-be0c-4187-85e8-33290c1ac419" containerName="controller-manager" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.860800 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ca1bea-be0c-4187-85e8-33290c1ac419" containerName="controller-manager" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.860969 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ca1bea-be0c-4187-85e8-33290c1ac419" containerName="controller-manager" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.861588 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.863938 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.868113 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.868296 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.868207 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.868442 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.868706 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.875266 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-747dd9759b-6wjt2"] Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.881958 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.971262 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbc546db-e9d3-40f3-9256-647759116f56-serving-cert\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.971784 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbc546db-e9d3-40f3-9256-647759116f56-config\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.971871 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fbc546db-e9d3-40f3-9256-647759116f56-proxy-ca-bundles\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.972080 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr5vm\" (UniqueName: \"kubernetes.io/projected/fbc546db-e9d3-40f3-9256-647759116f56-kube-api-access-gr5vm\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.972195 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbc546db-e9d3-40f3-9256-647759116f56-client-ca\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.987184 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56ca1bea-be0c-4187-85e8-33290c1ac419" path="/var/lib/kubelet/pods/56ca1bea-be0c-4187-85e8-33290c1ac419/volumes" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.073751 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbc546db-e9d3-40f3-9256-647759116f56-serving-cert\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.073843 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbc546db-e9d3-40f3-9256-647759116f56-config\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.073915 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fbc546db-e9d3-40f3-9256-647759116f56-proxy-ca-bundles\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.073981 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr5vm\" (UniqueName: \"kubernetes.io/projected/fbc546db-e9d3-40f3-9256-647759116f56-kube-api-access-gr5vm\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 
crc kubenswrapper[4751]: I0130 21:20:50.074031 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbc546db-e9d3-40f3-9256-647759116f56-client-ca\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.075489 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fbc546db-e9d3-40f3-9256-647759116f56-proxy-ca-bundles\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.076856 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbc546db-e9d3-40f3-9256-647759116f56-config\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.077048 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbc546db-e9d3-40f3-9256-647759116f56-client-ca\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.089083 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbc546db-e9d3-40f3-9256-647759116f56-serving-cert\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.104497 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr5vm\" (UniqueName: \"kubernetes.io/projected/fbc546db-e9d3-40f3-9256-647759116f56-kube-api-access-gr5vm\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.187397 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.401276 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-747dd9759b-6wjt2"] Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.768746 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" event={"ID":"fbc546db-e9d3-40f3-9256-647759116f56","Type":"ContainerStarted","Data":"c28347fb621a15c101af03728776f4e336c6f48b978982a3712d3d04acfbface"} Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.769167 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" event={"ID":"fbc546db-e9d3-40f3-9256-647759116f56","Type":"ContainerStarted","Data":"388c1b696957d2112af1748b75b308cbc4f1c5fc159d891f74bc60f4c863d67b"} Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.769190 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.778841 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.812935 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" podStartSLOduration=2.812913719 podStartE2EDuration="2.812913719s" podCreationTimestamp="2026-01-30 21:20:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:20:50.792556312 +0000 UTC m=+389.538378991" watchObservedRunningTime="2026-01-30 21:20:50.812913719 +0000 UTC m=+389.558736378" Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.720306 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-54ffx"] Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.721033 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-54ffx" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" containerName="registry-server" containerID="cri-o://1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2" gracePeriod=30 Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.741214 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wvvq8"] Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.741802 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wvvq8" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerName="registry-server" containerID="cri-o://6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470" gracePeriod=30 Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.753413 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tr6kv"] Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.753815 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" podUID="5243a1a5-2eaa-4437-b10e-602439c7c838" containerName="marketplace-operator" 
containerID="cri-o://b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b" gracePeriod=30 Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.765797 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6v829"] Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.766219 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6v829" podUID="94e03be5-809d-49ba-9318-6222131628f5" containerName="registry-server" containerID="cri-o://1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6" gracePeriod=30 Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.777954 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zct7w"] Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.778350 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zct7w" podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerName="registry-server" containerID="cri-o://6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7" gracePeriod=30 Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.784116 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-76rml"] Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.785342 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.793557 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-76rml"] Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.826610 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnnj4\" (UniqueName: \"kubernetes.io/projected/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-kube-api-access-gnnj4\") pod \"marketplace-operator-79b997595-76rml\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") " pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.826675 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-76rml\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") " pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.826754 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-76rml\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") " pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.927519 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-76rml\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") " pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 
30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.927586 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-76rml\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") " pod="openshift-marketplace/marketplace-operator-79b997595-76rml"
Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.927669 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnnj4\" (UniqueName: \"kubernetes.io/projected/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-kube-api-access-gnnj4\") pod \"marketplace-operator-79b997595-76rml\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") " pod="openshift-marketplace/marketplace-operator-79b997595-76rml"
Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.929806 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-76rml\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") " pod="openshift-marketplace/marketplace-operator-79b997595-76rml"
Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.949710 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-76rml\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") " pod="openshift-marketplace/marketplace-operator-79b997595-76rml"
Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.950126 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnnj4\" (UniqueName: \"kubernetes.io/projected/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-kube-api-access-gnnj4\") pod \"marketplace-operator-79b997595-76rml\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") " pod="openshift-marketplace/marketplace-operator-79b997595-76rml"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.208517 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-76rml"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.217884 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.218462 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-54ffx"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.226784 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wvvq8"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.232155 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6v829"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.235521 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7665\" (UniqueName: \"kubernetes.io/projected/5243a1a5-2eaa-4437-b10e-602439c7c838-kube-api-access-b7665\") pod \"5243a1a5-2eaa-4437-b10e-602439c7c838\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") "
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.235560 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-catalog-content\") pod \"5de678c2-f43a-44fa-ab58-259f765c3e31\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") "
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.235828 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-operator-metrics\") pod \"5243a1a5-2eaa-4437-b10e-602439c7c838\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") "
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.235859 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-trusted-ca\") pod \"5243a1a5-2eaa-4437-b10e-602439c7c838\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") "
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.235888 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-utilities\") pod \"5de678c2-f43a-44fa-ab58-259f765c3e31\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") "
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.235923 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tng6d\" (UniqueName: \"kubernetes.io/projected/a59ef52d-2f47-42ac-a233-0285be317cc9-kube-api-access-tng6d\") pod \"a59ef52d-2f47-42ac-a233-0285be317cc9\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") "
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.235941 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh7pr\" (UniqueName: \"kubernetes.io/projected/5de678c2-f43a-44fa-ab58-259f765c3e31-kube-api-access-dh7pr\") pod \"5de678c2-f43a-44fa-ab58-259f765c3e31\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") "
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.235966 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-catalog-content\") pod \"a59ef52d-2f47-42ac-a233-0285be317cc9\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") "
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.235995 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-utilities\") pod \"a59ef52d-2f47-42ac-a233-0285be317cc9\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") "
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.240971 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-utilities" (OuterVolumeSpecName: "utilities") pod "a59ef52d-2f47-42ac-a233-0285be317cc9" (UID: "a59ef52d-2f47-42ac-a233-0285be317cc9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.241808 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "5243a1a5-2eaa-4437-b10e-602439c7c838" (UID: "5243a1a5-2eaa-4437-b10e-602439c7c838"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.245177 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5243a1a5-2eaa-4437-b10e-602439c7c838-kube-api-access-b7665" (OuterVolumeSpecName: "kube-api-access-b7665") pod "5243a1a5-2eaa-4437-b10e-602439c7c838" (UID: "5243a1a5-2eaa-4437-b10e-602439c7c838"). InnerVolumeSpecName "kube-api-access-b7665". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.245233 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "5243a1a5-2eaa-4437-b10e-602439c7c838" (UID: "5243a1a5-2eaa-4437-b10e-602439c7c838"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.245302 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-utilities" (OuterVolumeSpecName: "utilities") pod "5de678c2-f43a-44fa-ab58-259f765c3e31" (UID: "5de678c2-f43a-44fa-ab58-259f765c3e31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.245576 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a59ef52d-2f47-42ac-a233-0285be317cc9-kube-api-access-tng6d" (OuterVolumeSpecName: "kube-api-access-tng6d") pod "a59ef52d-2f47-42ac-a233-0285be317cc9" (UID: "a59ef52d-2f47-42ac-a233-0285be317cc9"). InnerVolumeSpecName "kube-api-access-tng6d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.247546 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5de678c2-f43a-44fa-ab58-259f765c3e31-kube-api-access-dh7pr" (OuterVolumeSpecName: "kube-api-access-dh7pr") pod "5de678c2-f43a-44fa-ab58-259f765c3e31" (UID: "5de678c2-f43a-44fa-ab58-259f765c3e31"). InnerVolumeSpecName "kube-api-access-dh7pr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.291892 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zct7w"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.311235 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5de678c2-f43a-44fa-ab58-259f765c3e31" (UID: "5de678c2-f43a-44fa-ab58-259f765c3e31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.337447 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-catalog-content\") pod \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") "
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.337565 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm6bs\" (UniqueName: \"kubernetes.io/projected/94e03be5-809d-49ba-9318-6222131628f5-kube-api-access-sm6bs\") pod \"94e03be5-809d-49ba-9318-6222131628f5\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") "
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.337591 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97l8g\" (UniqueName: \"kubernetes.io/projected/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-kube-api-access-97l8g\") pod \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") "
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.337650 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-catalog-content\") pod \"94e03be5-809d-49ba-9318-6222131628f5\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") "
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.337671 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-utilities\") pod \"94e03be5-809d-49ba-9318-6222131628f5\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") "
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.337690 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-utilities\") pod \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") "
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.337962 4751 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.337977 4751 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.337987 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.337998 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tng6d\" (UniqueName: \"kubernetes.io/projected/a59ef52d-2f47-42ac-a233-0285be317cc9-kube-api-access-tng6d\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.338010 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh7pr\" (UniqueName: \"kubernetes.io/projected/5de678c2-f43a-44fa-ab58-259f765c3e31-kube-api-access-dh7pr\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.338022 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.338033 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7665\" (UniqueName: \"kubernetes.io/projected/5243a1a5-2eaa-4437-b10e-602439c7c838-kube-api-access-b7665\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.338044 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.338888 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-utilities" (OuterVolumeSpecName: "utilities") pod "b05ec0ea-cf7e-46ce-9814-a4597ebcf238" (UID: "b05ec0ea-cf7e-46ce-9814-a4597ebcf238"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.339832 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-utilities" (OuterVolumeSpecName: "utilities") pod "94e03be5-809d-49ba-9318-6222131628f5" (UID: "94e03be5-809d-49ba-9318-6222131628f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.341514 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-kube-api-access-97l8g" (OuterVolumeSpecName: "kube-api-access-97l8g") pod "b05ec0ea-cf7e-46ce-9814-a4597ebcf238" (UID: "b05ec0ea-cf7e-46ce-9814-a4597ebcf238"). InnerVolumeSpecName "kube-api-access-97l8g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.342134 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94e03be5-809d-49ba-9318-6222131628f5-kube-api-access-sm6bs" (OuterVolumeSpecName: "kube-api-access-sm6bs") pod "94e03be5-809d-49ba-9318-6222131628f5" (UID: "94e03be5-809d-49ba-9318-6222131628f5"). InnerVolumeSpecName "kube-api-access-sm6bs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.372801 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94e03be5-809d-49ba-9318-6222131628f5" (UID: "94e03be5-809d-49ba-9318-6222131628f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.415817 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a59ef52d-2f47-42ac-a233-0285be317cc9" (UID: "a59ef52d-2f47-42ac-a233-0285be317cc9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.439343 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.439375 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm6bs\" (UniqueName: \"kubernetes.io/projected/94e03be5-809d-49ba-9318-6222131628f5-kube-api-access-sm6bs\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.439387 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97l8g\" (UniqueName: \"kubernetes.io/projected/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-kube-api-access-97l8g\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.439394 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.439403 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.439410 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.471126 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b05ec0ea-cf7e-46ce-9814-a4597ebcf238" (UID: "b05ec0ea-cf7e-46ce-9814-a4597ebcf238"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.540352 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.663682 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-76rml"]
Jan 30 21:20:53 crc kubenswrapper[4751]: W0130 21:20:53.663770 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdcb33b0_97a6_4ded_96b6_1c5bd9053977.slice/crio-644b46c2a2fab923799c15a7a1cf7953e3083f14e5f91214c52144072cb6a7fb WatchSource:0}: Error finding container 644b46c2a2fab923799c15a7a1cf7953e3083f14e5f91214c52144072cb6a7fb: Status 404 returned error can't find the container with id 644b46c2a2fab923799c15a7a1cf7953e3083f14e5f91214c52144072cb6a7fb
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.793057 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-76rml" event={"ID":"cdcb33b0-97a6-4ded-96b6-1c5bd9053977","Type":"ContainerStarted","Data":"644b46c2a2fab923799c15a7a1cf7953e3083f14e5f91214c52144072cb6a7fb"}
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.798938 4751 generic.go:334] "Generic (PLEG): container finished" podID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerID="6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7" exitCode=0
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.798992 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zct7w" event={"ID":"b05ec0ea-cf7e-46ce-9814-a4597ebcf238","Type":"ContainerDied","Data":"6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7"}
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.799022 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zct7w" event={"ID":"b05ec0ea-cf7e-46ce-9814-a4597ebcf238","Type":"ContainerDied","Data":"804ecfb30bc123f3020417772e2716aa7215e9f0bbcc895b3845fd67eade69b4"}
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.799038 4751 scope.go:117] "RemoveContainer" containerID="6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.799151 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zct7w"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.804041 4751 generic.go:334] "Generic (PLEG): container finished" podID="5243a1a5-2eaa-4437-b10e-602439c7c838" containerID="b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b" exitCode=0
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.804114 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.804132 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" event={"ID":"5243a1a5-2eaa-4437-b10e-602439c7c838","Type":"ContainerDied","Data":"b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b"}
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.804160 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" event={"ID":"5243a1a5-2eaa-4437-b10e-602439c7c838","Type":"ContainerDied","Data":"645fc7ebe618428269447cd8603adff67691b64d1f9d9c2663bb2b21ba6d290d"}
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.808122 4751 generic.go:334] "Generic (PLEG): container finished" podID="a59ef52d-2f47-42ac-a233-0285be317cc9" containerID="1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2" exitCode=0
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.808187 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54ffx" event={"ID":"a59ef52d-2f47-42ac-a233-0285be317cc9","Type":"ContainerDied","Data":"1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2"}
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.808215 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54ffx" event={"ID":"a59ef52d-2f47-42ac-a233-0285be317cc9","Type":"ContainerDied","Data":"3de6594576878279730bf6ad7c0a39ba28b9c63e62d19e6f38aaeefbede04797"}
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.808282 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-54ffx"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.816078 4751 generic.go:334] "Generic (PLEG): container finished" podID="94e03be5-809d-49ba-9318-6222131628f5" containerID="1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6" exitCode=0
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.816155 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6v829"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.816159 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6v829" event={"ID":"94e03be5-809d-49ba-9318-6222131628f5","Type":"ContainerDied","Data":"1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6"}
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.816269 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6v829" event={"ID":"94e03be5-809d-49ba-9318-6222131628f5","Type":"ContainerDied","Data":"960e022b4f8bb566d2fdbe8e623c147ebba25b0f4a883e6013345ce05433bda9"}
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.820222 4751 generic.go:334] "Generic (PLEG): container finished" podID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerID="6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470" exitCode=0
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.820262 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvvq8" event={"ID":"5de678c2-f43a-44fa-ab58-259f765c3e31","Type":"ContainerDied","Data":"6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470"}
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.820292 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvvq8" event={"ID":"5de678c2-f43a-44fa-ab58-259f765c3e31","Type":"ContainerDied","Data":"25d69c268722a1234878b44da4db4eac47a853d184bfae913c7a2d4ea1ad28d3"}
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.820374 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wvvq8"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.822112 4751 scope.go:117] "RemoveContainer" containerID="f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.843306 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zct7w"]
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.853990 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zct7w"]
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.863587 4751 scope.go:117] "RemoveContainer" containerID="3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.863688 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tr6kv"]
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.868836 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tr6kv"]
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.874042 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wvvq8"]
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.878884 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wvvq8"]
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.890919 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6v829"]
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.890976 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6v829"]
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.896395 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-54ffx"]
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.900798 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-54ffx"]
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.926466 4751 scope.go:117] "RemoveContainer" containerID="6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7"
Jan 30 21:20:53 crc kubenswrapper[4751]: E0130 21:20:53.928522 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7\": container with ID starting with 6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7 not found: ID does not exist" containerID="6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.928586 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7"} err="failed to get container status \"6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7\": rpc error: code = NotFound desc = could not find container \"6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7\": container with ID starting with 6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7 not found: ID does not exist"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.928613 4751 scope.go:117] "RemoveContainer" containerID="f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1"
Jan 30 21:20:53 crc kubenswrapper[4751]: E0130 21:20:53.928935 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1\": container with ID starting with f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1 not found: ID does not exist" containerID="f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.928971 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1"} err="failed to get container status \"f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1\": rpc error: code = NotFound desc = could not find container \"f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1\": container with ID starting with f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1 not found: ID does not exist"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.928984 4751 scope.go:117] "RemoveContainer" containerID="3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642"
Jan 30 21:20:53 crc kubenswrapper[4751]: E0130 21:20:53.929217 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642\": container with ID starting with 3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642 not found: ID does not exist" containerID="3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.929243 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642"} err="failed to get container status \"3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642\": rpc error: code = NotFound desc = could not find container \"3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642\": container with ID starting with 3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642 not found: ID does not exist"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.929259 4751 scope.go:117] "RemoveContainer" containerID="b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.942482 4751 scope.go:117] "RemoveContainer" containerID="b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b"
Jan 30 21:20:53 crc kubenswrapper[4751]: E0130 21:20:53.942930 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b\": container with ID starting with b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b not found: ID does not exist" containerID="b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.942956 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b"} err="failed to get container status \"b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b\": rpc error: code = NotFound desc = could not find container \"b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b\": container with ID starting with b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b not found: ID does not exist"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.942972 4751 scope.go:117] "RemoveContainer" containerID="1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.958531 4751 scope.go:117] "RemoveContainer" containerID="1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.977970 4751 scope.go:117] "RemoveContainer" containerID="59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.985136 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5243a1a5-2eaa-4437-b10e-602439c7c838" path="/var/lib/kubelet/pods/5243a1a5-2eaa-4437-b10e-602439c7c838/volumes"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.985692 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" path="/var/lib/kubelet/pods/5de678c2-f43a-44fa-ab58-259f765c3e31/volumes"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.986410 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94e03be5-809d-49ba-9318-6222131628f5" path="/var/lib/kubelet/pods/94e03be5-809d-49ba-9318-6222131628f5/volumes"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.987514 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" path="/var/lib/kubelet/pods/a59ef52d-2f47-42ac-a233-0285be317cc9/volumes"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.988178 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" path="/var/lib/kubelet/pods/b05ec0ea-cf7e-46ce-9814-a4597ebcf238/volumes"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.997126 4751 scope.go:117] "RemoveContainer" containerID="1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2"
Jan 30 21:20:53 crc kubenswrapper[4751]: E0130 21:20:53.997471 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2\": container with ID starting with 1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2 not found: ID does not exist" containerID="1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.997710 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2"} err="failed to get container status \"1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2\": rpc error: code = NotFound desc = could not find container \"1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2\": container with ID starting with 1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2 not found: ID does not exist"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.997747 4751 scope.go:117] "RemoveContainer" containerID="1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b"
Jan 30 21:20:53 crc kubenswrapper[4751]: E0130 21:20:53.997982 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b\": container with ID starting with 1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b not found: ID does not exist" containerID="1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.998007 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b"} err="failed to get container status \"1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b\": rpc error: code = NotFound desc = could not find container \"1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b\": container with ID starting with 1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b not found: ID does not exist"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.998026 4751 scope.go:117] "RemoveContainer" containerID="59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170"
Jan 30 21:20:53 crc kubenswrapper[4751]: E0130 21:20:53.998178 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170\": container with ID starting with 59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170 not found: ID does not exist" containerID="59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.998197 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170"} err="failed to get container status \"59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170\": rpc error: code = NotFound desc = could not find container \"59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170\": container with ID starting with 59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170 not found: ID does not exist"
Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.998209 4751 scope.go:117] "RemoveContainer" containerID="1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.021317 4751 scope.go:117] "RemoveContainer" containerID="cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.038402 4751 scope.go:117] "RemoveContainer" containerID="2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.062076 4751 scope.go:117] "RemoveContainer" containerID="1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6"
Jan 30 21:20:54 crc kubenswrapper[4751]: E0130 21:20:54.062600 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6\": container with ID starting with 1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6 not found: ID does not exist" containerID="1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.062661 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6"} err="failed to get container status \"1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6\": rpc error: code = NotFound desc = could not find container \"1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6\": container with ID starting with 1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6 not found: ID does not exist"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.062690 4751 scope.go:117] "RemoveContainer" containerID="cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855"
Jan 30 21:20:54 crc kubenswrapper[4751]: E0130 21:20:54.062997 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855\": container with ID starting with cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855 not found: ID does not exist" containerID="cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.063025 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855"} err="failed to get container status \"cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855\": rpc error: code = NotFound desc = could not find container \"cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855\": container with ID starting with cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855 not found: ID does not exist"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.063045 4751 scope.go:117] "RemoveContainer" containerID="2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e"
Jan 30 21:20:54 crc kubenswrapper[4751]: E0130 21:20:54.063570 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e\": container with ID starting with 2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e not found: ID does not exist" containerID="2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.063660 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e"} err="failed to get container status \"2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e\": rpc error: code = NotFound desc = could not find container \"2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e\": container with ID starting with 2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e not found: ID does not exist"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.063730 4751 scope.go:117] "RemoveContainer" containerID="6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.078461 4751 scope.go:117] "RemoveContainer" containerID="32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.094793 4751 scope.go:117] "RemoveContainer" containerID="c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.110687 4751 scope.go:117] "RemoveContainer" containerID="6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470"
Jan 30 21:20:54 crc kubenswrapper[4751]: E0130 21:20:54.111083 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470\": container with ID starting with 6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470 not found: ID does not exist" containerID="6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.111123 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470"} err="failed to get container status \"6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470\": rpc error: code = NotFound desc = could not find container \"6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470\": container with ID starting with 6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470 not found: ID does not exist"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.111149 4751 scope.go:117] "RemoveContainer" containerID="32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89"
Jan 30 21:20:54 crc kubenswrapper[4751]: E0130 21:20:54.111398 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89\": container with ID starting with 32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89 not found: ID does not exist" containerID="32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.111421 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89"} err="failed to get container status \"32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89\": rpc error: code = NotFound desc = could not find container \"32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89\": container with ID starting with 32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89 not found: ID does not exist"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.111434 4751 scope.go:117] "RemoveContainer" containerID="c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478"
Jan 30 21:20:54 crc kubenswrapper[4751]: E0130 21:20:54.111617 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478\": container with ID starting with c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478 not found: ID does not exist" containerID="c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.111632 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478"} err="failed to get container status \"c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478\": rpc error: code = NotFound desc = could not find container \"c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478\": container with ID starting with c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478 not found: ID does not exist"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.127097 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.127135 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.127174 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.127784 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"daa3657d48b883db14b4975f24f93c0b2c6f7eb8738d3c0267f1f4f003ba63aa"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.127833 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://daa3657d48b883db14b4975f24f93c0b2c6f7eb8738d3c0267f1f4f003ba63aa" gracePeriod=600
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.776255 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-psfpp"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.829348 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="daa3657d48b883db14b4975f24f93c0b2c6f7eb8738d3c0267f1f4f003ba63aa" exitCode=0
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.829368 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"daa3657d48b883db14b4975f24f93c0b2c6f7eb8738d3c0267f1f4f003ba63aa"}
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.829495 4751 scope.go:117] "RemoveContainer" containerID="804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.847804 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-76rml" event={"ID":"cdcb33b0-97a6-4ded-96b6-1c5bd9053977","Type":"ContainerStarted","Data":"6555bc08329fa2ff543d4810ec47f9a72956f19cbf66209a9749cc91438e7744"}
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.850046 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-76rml"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.853958 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-76rml"
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.862870 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9lsr5"]
Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.906272 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-76rml" podStartSLOduration=2.90620024 podStartE2EDuration="2.90620024s" podCreationTimestamp="2026-01-30 21:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:20:54.881560066 +0000 UTC m=+393.627382745" watchObservedRunningTime="2026-01-30 21:20:54.90620024 +0000 UTC m=+393.652022899"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.230915 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-btf57"]
Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231459 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" containerName="extract-utilities"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231482 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" containerName="extract-utilities"
Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231497 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" containerName="registry-server"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231507 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" containerName="registry-server"
Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231516 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5243a1a5-2eaa-4437-b10e-602439c7c838" containerName="marketplace-operator"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231524 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5243a1a5-2eaa-4437-b10e-602439c7c838" containerName="marketplace-operator"
Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231533 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94e03be5-809d-49ba-9318-6222131628f5" containerName="registry-server"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231540 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e03be5-809d-49ba-9318-6222131628f5" containerName="registry-server"
Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231554 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerName="extract-utilities"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231561 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerName="extract-utilities"
Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231574 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerName="extract-content"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231581 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerName="extract-content"
Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231594 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94e03be5-809d-49ba-9318-6222131628f5" containerName="extract-content"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231601 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e03be5-809d-49ba-9318-6222131628f5" containerName="extract-content"
Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231612 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerName="extract-utilities"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231620 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerName="extract-utilities"
Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231630 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerName="registry-server"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231639 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerName="registry-server"
Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231650 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" containerName="extract-content"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231657 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" containerName="extract-content"
Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231668 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerName="registry-server"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231676 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerName="registry-server"
Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231684 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94e03be5-809d-49ba-9318-6222131628f5" containerName="extract-utilities"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231691 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e03be5-809d-49ba-9318-6222131628f5" containerName="extract-utilities"
Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231703 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerName="extract-content"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231710 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerName="extract-content"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231811 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5243a1a5-2eaa-4437-b10e-602439c7c838" containerName="marketplace-operator"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231828 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" containerName="registry-server"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231838 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="94e03be5-809d-49ba-9318-6222131628f5" containerName="registry-server"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231853 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerName="registry-server"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231862 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerName="registry-server"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.232690 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btf57"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.234491 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.286297 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btf57"]
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.372062 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-catalog-content\") pod \"redhat-marketplace-btf57\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") " pod="openshift-marketplace/redhat-marketplace-btf57"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.372128 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-utilities\") pod \"redhat-marketplace-btf57\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") " pod="openshift-marketplace/redhat-marketplace-btf57"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.372168 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8stj\" (UniqueName: \"kubernetes.io/projected/6a24b1f1-0656-41ef-826d-c6c40f96b470-kube-api-access-z8stj\") pod \"redhat-marketplace-btf57\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") " pod="openshift-marketplace/redhat-marketplace-btf57"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.473598 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-catalog-content\") pod \"redhat-marketplace-btf57\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") " pod="openshift-marketplace/redhat-marketplace-btf57"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.473665 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-utilities\") pod \"redhat-marketplace-btf57\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") " pod="openshift-marketplace/redhat-marketplace-btf57"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.473710 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8stj\" (UniqueName: \"kubernetes.io/projected/6a24b1f1-0656-41ef-826d-c6c40f96b470-kube-api-access-z8stj\") pod \"redhat-marketplace-btf57\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") " pod="openshift-marketplace/redhat-marketplace-btf57"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.474241 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-utilities\") pod \"redhat-marketplace-btf57\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") " pod="openshift-marketplace/redhat-marketplace-btf57"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.474543 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-catalog-content\") pod \"redhat-marketplace-btf57\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") " pod="openshift-marketplace/redhat-marketplace-btf57"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.493407 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8stj\" (UniqueName: \"kubernetes.io/projected/6a24b1f1-0656-41ef-826d-c6c40f96b470-kube-api-access-z8stj\") pod \"redhat-marketplace-btf57\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") " pod="openshift-marketplace/redhat-marketplace-btf57"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.558412 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btf57"
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.902995 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"a1b797d24a7a7f0cfe28e0e7b1326aa242a6fa28ef5d30064b33f02362b2f1a6"}
Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.961828 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btf57"]
Jan 30 21:20:55 crc kubenswrapper[4751]: W0130 21:20:55.969554 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a24b1f1_0656_41ef_826d_c6c40f96b470.slice/crio-e2bcb606a7c5b09ddbf81239b5c38048d9cdd9c5406a2b4fba58566803a5b46b WatchSource:0}: Error finding container e2bcb606a7c5b09ddbf81239b5c38048d9cdd9c5406a2b4fba58566803a5b46b: Status 404 returned error can't find the container with id e2bcb606a7c5b09ddbf81239b5c38048d9cdd9c5406a2b4fba58566803a5b46b
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.210033 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p4hxc"]
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.211173 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.213267 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.223123 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p4hxc"]
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.385852 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d6jg\" (UniqueName: \"kubernetes.io/projected/448ce159-6181-433b-a28a-d00b9240b5af-kube-api-access-8d6jg\") pod \"redhat-operators-p4hxc\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") " pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.386250 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-catalog-content\") pod \"redhat-operators-p4hxc\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") " pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.386291 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-utilities\") pod \"redhat-operators-p4hxc\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") " pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.487410 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-catalog-content\") pod \"redhat-operators-p4hxc\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") " pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.487472 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-utilities\") pod \"redhat-operators-p4hxc\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") " pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.487580 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d6jg\" (UniqueName: \"kubernetes.io/projected/448ce159-6181-433b-a28a-d00b9240b5af-kube-api-access-8d6jg\") pod \"redhat-operators-p4hxc\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") " pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.487970 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-utilities\") pod \"redhat-operators-p4hxc\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") " pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.488097 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-catalog-content\") pod \"redhat-operators-p4hxc\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") " pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.516117 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d6jg\" (UniqueName: \"kubernetes.io/projected/448ce159-6181-433b-a28a-d00b9240b5af-kube-api-access-8d6jg\") pod \"redhat-operators-p4hxc\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") " pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.539453 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.914676 4751 generic.go:334] "Generic (PLEG): container finished" podID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerID="bbac4a5fe3fc00609faebe7f98affa8ef8408a492e79ad4eb2e51f42853acfd7" exitCode=0
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.916917 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btf57" event={"ID":"6a24b1f1-0656-41ef-826d-c6c40f96b470","Type":"ContainerDied","Data":"bbac4a5fe3fc00609faebe7f98affa8ef8408a492e79ad4eb2e51f42853acfd7"}
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.917419 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btf57" event={"ID":"6a24b1f1-0656-41ef-826d-c6c40f96b470","Type":"ContainerStarted","Data":"e2bcb606a7c5b09ddbf81239b5c38048d9cdd9c5406a2b4fba58566803a5b46b"}
Jan 30 21:20:57 crc kubenswrapper[4751]: I0130 21:20:57.005509 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p4hxc"]
Jan 30 21:20:57 crc kubenswrapper[4751]: W0130 21:20:57.012011 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod448ce159_6181_433b_a28a_d00b9240b5af.slice/crio-73ad198924f3e1eade0a31a7aa5614d242dcbb52f38d6f5161410d402c09b507 WatchSource:0}: Error finding container 73ad198924f3e1eade0a31a7aa5614d242dcbb52f38d6f5161410d402c09b507: Status 404 returned error can't find the container with id 73ad198924f3e1eade0a31a7aa5614d242dcbb52f38d6f5161410d402c09b507
Jan 30 21:20:57 crc kubenswrapper[4751]: I0130 21:20:57.922056 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btf57" event={"ID":"6a24b1f1-0656-41ef-826d-c6c40f96b470","Type":"ContainerStarted","Data":"b5fe421c84c49a0fce9c766932eb37dc6ebd8f10a339e43911d566e5bf55820f"}
Jan 30 21:20:57 crc kubenswrapper[4751]: I0130 21:20:57.924023 4751 generic.go:334] "Generic (PLEG): container finished" podID="448ce159-6181-433b-a28a-d00b9240b5af" containerID="10d985df0a9120f84aedb7a8499aa2e73fa1eb168ac9332a258bbeadbd76d96e" exitCode=0
Jan 30 21:20:57 crc kubenswrapper[4751]: I0130 21:20:57.924072 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4hxc" event={"ID":"448ce159-6181-433b-a28a-d00b9240b5af","Type":"ContainerDied","Data":"10d985df0a9120f84aedb7a8499aa2e73fa1eb168ac9332a258bbeadbd76d96e"}
Jan 30 21:20:57 crc kubenswrapper[4751]: I0130 21:20:57.924116 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4hxc" event={"ID":"448ce159-6181-433b-a28a-d00b9240b5af","Type":"ContainerStarted","Data":"73ad198924f3e1eade0a31a7aa5614d242dcbb52f38d6f5161410d402c09b507"}
Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.014817 4751 kubelet.go:2421]
"SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-twcnd"] Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.016302 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-twcnd" Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.061863 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.067968 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-twcnd"] Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.106305 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-utilities\") pod \"certified-operators-twcnd\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") " pod="openshift-marketplace/certified-operators-twcnd" Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.106638 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-catalog-content\") pod \"certified-operators-twcnd\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") " pod="openshift-marketplace/certified-operators-twcnd" Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.106775 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqk8q\" (UniqueName: \"kubernetes.io/projected/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-kube-api-access-qqk8q\") pod \"certified-operators-twcnd\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") " pod="openshift-marketplace/certified-operators-twcnd" Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.208073 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-catalog-content\") pod \"certified-operators-twcnd\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") " pod="openshift-marketplace/certified-operators-twcnd" Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.208147 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqk8q\" (UniqueName: \"kubernetes.io/projected/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-kube-api-access-qqk8q\") pod \"certified-operators-twcnd\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") " pod="openshift-marketplace/certified-operators-twcnd" Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.208180 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-utilities\") pod \"certified-operators-twcnd\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") " pod="openshift-marketplace/certified-operators-twcnd" Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.208629 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-catalog-content\") pod \"certified-operators-twcnd\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") " pod="openshift-marketplace/certified-operators-twcnd" Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.208649 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-utilities\") pod \"certified-operators-twcnd\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") " pod="openshift-marketplace/certified-operators-twcnd" Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.230679 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqk8q\" (UniqueName: \"kubernetes.io/projected/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-kube-api-access-qqk8q\") pod \"certified-operators-twcnd\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") " pod="openshift-marketplace/certified-operators-twcnd" Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.374816 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-twcnd" Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.774577 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-twcnd"] Jan 30 21:20:58 crc kubenswrapper[4751]: W0130 21:20:58.782807 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac49c6a1_fa74_49f3_ba94_c5a469df4a93.slice/crio-1789e26083b6d1b5bcaf1c28e823207fbb2d904374cfefdfc648a991a801687a WatchSource:0}: Error finding container 1789e26083b6d1b5bcaf1c28e823207fbb2d904374cfefdfc648a991a801687a: Status 404 returned error can't find the container with id 1789e26083b6d1b5bcaf1c28e823207fbb2d904374cfefdfc648a991a801687a Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.929929 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twcnd" event={"ID":"ac49c6a1-fa74-49f3-ba94-c5a469df4a93","Type":"ContainerStarted","Data":"1789e26083b6d1b5bcaf1c28e823207fbb2d904374cfefdfc648a991a801687a"} Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.931546 4751 generic.go:334] "Generic (PLEG): container finished" podID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerID="b5fe421c84c49a0fce9c766932eb37dc6ebd8f10a339e43911d566e5bf55820f" exitCode=0 Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.931597 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btf57" event={"ID":"6a24b1f1-0656-41ef-826d-c6c40f96b470","Type":"ContainerDied","Data":"b5fe421c84c49a0fce9c766932eb37dc6ebd8f10a339e43911d566e5bf55820f"} Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.933127 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4hxc" event={"ID":"448ce159-6181-433b-a28a-d00b9240b5af","Type":"ContainerStarted","Data":"8c6af28f5b6624524db9760ac1812d7255cfb7aa12f4c630cb631a41508a66c5"} Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.010884 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bd2xs"] Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.012262 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bd2xs" Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.014188 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.018912 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bd2xs"] Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.120773 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kcw4\" (UniqueName: \"kubernetes.io/projected/a791b2a3-aead-4130-bdfa-e219f2d47593-kube-api-access-7kcw4\") pod \"community-operators-bd2xs\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " pod="openshift-marketplace/community-operators-bd2xs" Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.120845 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-utilities\") pod \"community-operators-bd2xs\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " pod="openshift-marketplace/community-operators-bd2xs" Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.120884 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-catalog-content\") pod \"community-operators-bd2xs\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " pod="openshift-marketplace/community-operators-bd2xs" Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.222185 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-utilities\") pod \"community-operators-bd2xs\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " pod="openshift-marketplace/community-operators-bd2xs" Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.222502 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-catalog-content\") pod \"community-operators-bd2xs\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " pod="openshift-marketplace/community-operators-bd2xs" Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.222637 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kcw4\" (UniqueName: \"kubernetes.io/projected/a791b2a3-aead-4130-bdfa-e219f2d47593-kube-api-access-7kcw4\") pod \"community-operators-bd2xs\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " pod="openshift-marketplace/community-operators-bd2xs" Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.224079 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-utilities\") pod \"community-operators-bd2xs\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " pod="openshift-marketplace/community-operators-bd2xs" Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.224928 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-catalog-content\") pod \"community-operators-bd2xs\" (UID: 
\"a791b2a3-aead-4130-bdfa-e219f2d47593\") " pod="openshift-marketplace/community-operators-bd2xs" Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.243438 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kcw4\" (UniqueName: \"kubernetes.io/projected/a791b2a3-aead-4130-bdfa-e219f2d47593-kube-api-access-7kcw4\") pod \"community-operators-bd2xs\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " pod="openshift-marketplace/community-operators-bd2xs" Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.360484 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bd2xs" Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.728257 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bd2xs"] Jan 30 21:20:59 crc kubenswrapper[4751]: W0130 21:20:59.737453 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda791b2a3_aead_4130_bdfa_e219f2d47593.slice/crio-434124a9abf250cc8847456cd8dfc444504ed2c5f79f019406475bd5e02dd626 WatchSource:0}: Error finding container 434124a9abf250cc8847456cd8dfc444504ed2c5f79f019406475bd5e02dd626: Status 404 returned error can't find the container with id 434124a9abf250cc8847456cd8dfc444504ed2c5f79f019406475bd5e02dd626 Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.938873 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd2xs" event={"ID":"a791b2a3-aead-4130-bdfa-e219f2d47593","Type":"ContainerStarted","Data":"434124a9abf250cc8847456cd8dfc444504ed2c5f79f019406475bd5e02dd626"} Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.941388 4751 generic.go:334] "Generic (PLEG): container finished" podID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerID="bb7468b7d7c0079e6174ab6fab8062e8d6fe8734e0fcc33a217d950b9c4934f4" exitCode=0 Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.941452 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twcnd" event={"ID":"ac49c6a1-fa74-49f3-ba94-c5a469df4a93","Type":"ContainerDied","Data":"bb7468b7d7c0079e6174ab6fab8062e8d6fe8734e0fcc33a217d950b9c4934f4"} Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.944282 4751 generic.go:334] "Generic (PLEG): container finished" podID="448ce159-6181-433b-a28a-d00b9240b5af" containerID="8c6af28f5b6624524db9760ac1812d7255cfb7aa12f4c630cb631a41508a66c5" exitCode=0 Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.944343 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4hxc" event={"ID":"448ce159-6181-433b-a28a-d00b9240b5af","Type":"ContainerDied","Data":"8c6af28f5b6624524db9760ac1812d7255cfb7aa12f4c630cb631a41508a66c5"} Jan 30 21:21:00 crc kubenswrapper[4751]: I0130 21:21:00.951458 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btf57" event={"ID":"6a24b1f1-0656-41ef-826d-c6c40f96b470","Type":"ContainerStarted","Data":"7b0cb114f2b94c0af64389530dc0e77b4ef4178db18be6009544673f334a8088"} Jan 30 21:21:00 crc kubenswrapper[4751]: I0130 21:21:00.953176 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4hxc" event={"ID":"448ce159-6181-433b-a28a-d00b9240b5af","Type":"ContainerStarted","Data":"a6b343dd72b5235871a55e1a2c2def12bf611b5a0982df3c0c87934e222e51ce"} Jan 30 21:21:00 crc 
kubenswrapper[4751]: I0130 21:21:00.955580 4751 generic.go:334] "Generic (PLEG): container finished" podID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerID="ab432e40787fc0f8c27455630b3e162f083b0e2d799d4a3e7e2a6dfb88ac3b16" exitCode=0 Jan 30 21:21:00 crc kubenswrapper[4751]: I0130 21:21:00.955612 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd2xs" event={"ID":"a791b2a3-aead-4130-bdfa-e219f2d47593","Type":"ContainerDied","Data":"ab432e40787fc0f8c27455630b3e162f083b0e2d799d4a3e7e2a6dfb88ac3b16"} Jan 30 21:21:00 crc kubenswrapper[4751]: I0130 21:21:00.990887 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-btf57" podStartSLOduration=3.215963105 podStartE2EDuration="5.990871471s" podCreationTimestamp="2026-01-30 21:20:55 +0000 UTC" firstStartedPulling="2026-01-30 21:20:56.927830133 +0000 UTC m=+395.673652782" lastFinishedPulling="2026-01-30 21:20:59.702738489 +0000 UTC m=+398.448561148" observedRunningTime="2026-01-30 21:21:00.971638461 +0000 UTC m=+399.717461140" watchObservedRunningTime="2026-01-30 21:21:00.990871471 +0000 UTC m=+399.736694120" Jan 30 21:21:01 crc kubenswrapper[4751]: I0130 21:21:01.039674 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p4hxc" podStartSLOduration=2.363838826 podStartE2EDuration="5.039653928s" podCreationTimestamp="2026-01-30 21:20:56 +0000 UTC" firstStartedPulling="2026-01-30 21:20:57.925601466 +0000 UTC m=+396.671424115" lastFinishedPulling="2026-01-30 21:21:00.601416568 +0000 UTC m=+399.347239217" observedRunningTime="2026-01-30 21:21:01.03789329 +0000 UTC m=+399.783715939" watchObservedRunningTime="2026-01-30 21:21:01.039653928 +0000 UTC m=+399.785476597" Jan 30 21:21:01 crc kubenswrapper[4751]: I0130 21:21:01.962468 4751 generic.go:334] "Generic (PLEG): container finished" podID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerID="2c82591f69d50ae83fda7597991bd617784911392dd33cf4f25ec660904d8e1e" exitCode=0 Jan 30 21:21:01 crc kubenswrapper[4751]: I0130 21:21:01.962668 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twcnd" event={"ID":"ac49c6a1-fa74-49f3-ba94-c5a469df4a93","Type":"ContainerDied","Data":"2c82591f69d50ae83fda7597991bd617784911392dd33cf4f25ec660904d8e1e"} Jan 30 21:21:02 crc kubenswrapper[4751]: I0130 21:21:02.970229 4751 generic.go:334] "Generic (PLEG): container finished" podID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerID="046ce5e1f77fe5269aa0733495a774c7014a135ba89622c5ae3b5e42a5e2bcc2" exitCode=0 Jan 30 21:21:02 crc kubenswrapper[4751]: I0130 21:21:02.970470 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd2xs" event={"ID":"a791b2a3-aead-4130-bdfa-e219f2d47593","Type":"ContainerDied","Data":"046ce5e1f77fe5269aa0733495a774c7014a135ba89622c5ae3b5e42a5e2bcc2"} Jan 30 21:21:04 crc kubenswrapper[4751]: I0130 21:21:04.982082 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twcnd" event={"ID":"ac49c6a1-fa74-49f3-ba94-c5a469df4a93","Type":"ContainerStarted","Data":"da576de8f1c9a0effcc2ee958957d7e00c5cf40151114d08518c5b0c29f0fc29"} Jan 30 21:21:04 crc kubenswrapper[4751]: I0130 21:21:04.983822 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd2xs" 
event={"ID":"a791b2a3-aead-4130-bdfa-e219f2d47593","Type":"ContainerStarted","Data":"960f258866c4433eda726c04fdab80b057c22c0920935676513c71ebdb592216"} Jan 30 21:21:04 crc kubenswrapper[4751]: I0130 21:21:04.997290 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-twcnd" podStartSLOduration=4.159214164 podStartE2EDuration="7.997274789s" podCreationTimestamp="2026-01-30 21:20:57 +0000 UTC" firstStartedPulling="2026-01-30 21:20:59.942578713 +0000 UTC m=+398.688401362" lastFinishedPulling="2026-01-30 21:21:03.780639338 +0000 UTC m=+402.526461987" observedRunningTime="2026-01-30 21:21:04.996052817 +0000 UTC m=+403.741875466" watchObservedRunningTime="2026-01-30 21:21:04.997274789 +0000 UTC m=+403.743097448" Jan 30 21:21:05 crc kubenswrapper[4751]: I0130 21:21:05.015287 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bd2xs" podStartSLOduration=4.095982573 podStartE2EDuration="7.015267056s" podCreationTimestamp="2026-01-30 21:20:58 +0000 UTC" firstStartedPulling="2026-01-30 21:21:00.960007508 +0000 UTC m=+399.705830167" lastFinishedPulling="2026-01-30 21:21:03.879292001 +0000 UTC m=+402.625114650" observedRunningTime="2026-01-30 21:21:05.014987208 +0000 UTC m=+403.760809857" watchObservedRunningTime="2026-01-30 21:21:05.015267056 +0000 UTC m=+403.761089705" Jan 30 21:21:05 crc kubenswrapper[4751]: I0130 21:21:05.559686 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-btf57" Jan 30 21:21:05 crc kubenswrapper[4751]: I0130 21:21:05.560000 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-btf57" Jan 30 21:21:05 crc kubenswrapper[4751]: I0130 21:21:05.611161 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-btf57" Jan 30 21:21:06 crc kubenswrapper[4751]: I0130 21:21:06.034148 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-btf57" Jan 30 21:21:06 crc kubenswrapper[4751]: I0130 21:21:06.540246 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p4hxc" Jan 30 21:21:06 crc kubenswrapper[4751]: I0130 21:21:06.540673 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p4hxc" Jan 30 21:21:07 crc kubenswrapper[4751]: I0130 21:21:07.589598 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p4hxc" podUID="448ce159-6181-433b-a28a-d00b9240b5af" containerName="registry-server" probeResult="failure" output=< Jan 30 21:21:07 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:21:07 crc kubenswrapper[4751]: > Jan 30 21:21:08 crc kubenswrapper[4751]: I0130 21:21:08.375550 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-twcnd" Jan 30 21:21:08 crc kubenswrapper[4751]: I0130 21:21:08.376135 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-twcnd" Jan 30 21:21:08 crc kubenswrapper[4751]: I0130 21:21:08.430257 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-twcnd" Jan 30 21:21:09 crc 
kubenswrapper[4751]: I0130 21:21:09.053959 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-twcnd" Jan 30 21:21:09 crc kubenswrapper[4751]: I0130 21:21:09.361456 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bd2xs" Jan 30 21:21:09 crc kubenswrapper[4751]: I0130 21:21:09.361608 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bd2xs" Jan 30 21:21:09 crc kubenswrapper[4751]: I0130 21:21:09.423814 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bd2xs" Jan 30 21:21:10 crc kubenswrapper[4751]: I0130 21:21:10.046275 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bd2xs" Jan 30 21:21:16 crc kubenswrapper[4751]: I0130 21:21:16.591869 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p4hxc" Jan 30 21:21:16 crc kubenswrapper[4751]: I0130 21:21:16.649706 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p4hxc" Jan 30 21:21:19 crc kubenswrapper[4751]: I0130 21:21:19.915156 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" podUID="73d0a80a-e569-428a-b251-33f28e06fffd" containerName="registry" containerID="cri-o://28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51" gracePeriod=30 Jan 30 21:21:20 crc kubenswrapper[4751]: I0130 21:21:20.399753 4751 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-9lsr5 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.16:5000/healthz\": dial tcp 10.217.0.16:5000: connect: connection refused" start-of-body= Jan 30 21:21:20 crc kubenswrapper[4751]: I0130 21:21:20.399808 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" podUID="73d0a80a-e569-428a-b251-33f28e06fffd" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.16:5000/healthz\": dial tcp 10.217.0.16:5000: connect: connection refused" Jan 30 21:21:20 crc kubenswrapper[4751]: I0130 21:21:20.987017 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.070913 4751 generic.go:334] "Generic (PLEG): container finished" podID="73d0a80a-e569-428a-b251-33f28e06fffd" containerID="28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51" exitCode=0 Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.071024 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" event={"ID":"73d0a80a-e569-428a-b251-33f28e06fffd","Type":"ContainerDied","Data":"28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51"} Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.071212 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" event={"ID":"73d0a80a-e569-428a-b251-33f28e06fffd","Type":"ContainerDied","Data":"af1fafb4fa1bc5d4e5549e32e14665bb190720767667d7915533461f80e83d20"} Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.071300 4751 scope.go:117] "RemoveContainer" containerID="28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51" Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.071618 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.091551 4751 scope.go:117] "RemoveContainer" containerID="28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51" Jan 30 21:21:21 crc kubenswrapper[4751]: E0130 21:21:21.092098 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51\": container with ID starting with 28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51 not found: ID does not exist" containerID="28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51" Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.092140 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51"} err="failed to get container status \"28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51\": rpc error: code = NotFound desc = could not find container \"28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51\": container with ID starting with 28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51 not found: ID does not exist" Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.175143 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cp5d\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-kube-api-access-2cp5d\") pod \"73d0a80a-e569-428a-b251-33f28e06fffd\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.175191 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-trusted-ca\") pod \"73d0a80a-e569-428a-b251-33f28e06fffd\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.175239 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-registry-certificates\") pod \"73d0a80a-e569-428a-b251-33f28e06fffd\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.175286 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73d0a80a-e569-428a-b251-33f28e06fffd-ca-trust-extracted\") pod \"73d0a80a-e569-428a-b251-33f28e06fffd\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.175344 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-registry-tls\") pod \"73d0a80a-e569-428a-b251-33f28e06fffd\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.175555 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"73d0a80a-e569-428a-b251-33f28e06fffd\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.175605 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73d0a80a-e569-428a-b251-33f28e06fffd-installation-pull-secrets\") pod \"73d0a80a-e569-428a-b251-33f28e06fffd\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.175626 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-bound-sa-token\") pod \"73d0a80a-e569-428a-b251-33f28e06fffd\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.176273 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "73d0a80a-e569-428a-b251-33f28e06fffd" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.177589 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "73d0a80a-e569-428a-b251-33f28e06fffd" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.182128 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73d0a80a-e569-428a-b251-33f28e06fffd-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "73d0a80a-e569-428a-b251-33f28e06fffd" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.184221 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-kube-api-access-2cp5d" (OuterVolumeSpecName: "kube-api-access-2cp5d") pod "73d0a80a-e569-428a-b251-33f28e06fffd" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd"). InnerVolumeSpecName "kube-api-access-2cp5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.184812 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "73d0a80a-e569-428a-b251-33f28e06fffd" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.185607 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "73d0a80a-e569-428a-b251-33f28e06fffd" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.216214 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73d0a80a-e569-428a-b251-33f28e06fffd-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "73d0a80a-e569-428a-b251-33f28e06fffd" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.238128 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "73d0a80a-e569-428a-b251-33f28e06fffd" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.277147 4751 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.277180 4751 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73d0a80a-e569-428a-b251-33f28e06fffd-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.277191 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.277199 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cp5d\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-kube-api-access-2cp5d\") on node \"crc\" DevicePath \"\"" Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.277208 4751 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.277215 4751 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73d0a80a-e569-428a-b251-33f28e06fffd-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.277223 4751 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.403175 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9lsr5"] Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.413495 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9lsr5"] Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.983127 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73d0a80a-e569-428a-b251-33f28e06fffd" path="/var/lib/kubelet/pods/73d0a80a-e569-428a-b251-33f28e06fffd/volumes" Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.619551 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg"] Jan 30 21:21:24 crc kubenswrapper[4751]: E0130 21:21:24.621782 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d0a80a-e569-428a-b251-33f28e06fffd" containerName="registry" Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.622043 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="73d0a80a-e569-428a-b251-33f28e06fffd" containerName="registry" Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.622536 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="73d0a80a-e569-428a-b251-33f28e06fffd" containerName="registry" Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.623545 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg" Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.627227 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.629533 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg"] Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.629872 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.630180 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.630458 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.630842 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.823390 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/b354db71-ccf5-4280-86a3-faf88514fb9d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-kfrjg\" (UID: \"b354db71-ccf5-4280-86a3-faf88514fb9d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg" Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.823480 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxvm8\" (UniqueName: \"kubernetes.io/projected/b354db71-ccf5-4280-86a3-faf88514fb9d-kube-api-access-jxvm8\") pod \"cluster-monitoring-operator-6d5b84845-kfrjg\" (UID: \"b354db71-ccf5-4280-86a3-faf88514fb9d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg" Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.823566 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/b354db71-ccf5-4280-86a3-faf88514fb9d-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-kfrjg\" (UID: \"b354db71-ccf5-4280-86a3-faf88514fb9d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg" Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.925112 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/b354db71-ccf5-4280-86a3-faf88514fb9d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-kfrjg\" (UID: \"b354db71-ccf5-4280-86a3-faf88514fb9d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg" Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.925202 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxvm8\" (UniqueName: \"kubernetes.io/projected/b354db71-ccf5-4280-86a3-faf88514fb9d-kube-api-access-jxvm8\") pod \"cluster-monitoring-operator-6d5b84845-kfrjg\" (UID: \"b354db71-ccf5-4280-86a3-faf88514fb9d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg" Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 
21:21:24.925305 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/b354db71-ccf5-4280-86a3-faf88514fb9d-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-kfrjg\" (UID: \"b354db71-ccf5-4280-86a3-faf88514fb9d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg" Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.927084 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/b354db71-ccf5-4280-86a3-faf88514fb9d-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-kfrjg\" (UID: \"b354db71-ccf5-4280-86a3-faf88514fb9d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg" Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.933236 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/b354db71-ccf5-4280-86a3-faf88514fb9d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-kfrjg\" (UID: \"b354db71-ccf5-4280-86a3-faf88514fb9d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg" Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.956429 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxvm8\" (UniqueName: \"kubernetes.io/projected/b354db71-ccf5-4280-86a3-faf88514fb9d-kube-api-access-jxvm8\") pod \"cluster-monitoring-operator-6d5b84845-kfrjg\" (UID: \"b354db71-ccf5-4280-86a3-faf88514fb9d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg" Jan 30 21:21:25 crc kubenswrapper[4751]: I0130 21:21:25.256455 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg" Jan 30 21:21:25 crc kubenswrapper[4751]: I0130 21:21:25.721240 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg"] Jan 30 21:21:25 crc kubenswrapper[4751]: W0130 21:21:25.728945 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb354db71_ccf5_4280_86a3_faf88514fb9d.slice/crio-45e4b1a29aca1e926794eb3c64f21a5c1ac916d5b573db998a665277bbe51635 WatchSource:0}: Error finding container 45e4b1a29aca1e926794eb3c64f21a5c1ac916d5b573db998a665277bbe51635: Status 404 returned error can't find the container with id 45e4b1a29aca1e926794eb3c64f21a5c1ac916d5b573db998a665277bbe51635 Jan 30 21:21:26 crc kubenswrapper[4751]: I0130 21:21:26.105917 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg" event={"ID":"b354db71-ccf5-4280-86a3-faf88514fb9d","Type":"ContainerStarted","Data":"45e4b1a29aca1e926794eb3c64f21a5c1ac916d5b573db998a665277bbe51635"} Jan 30 21:21:28 crc kubenswrapper[4751]: I0130 21:21:28.912339 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn"] Jan 30 21:21:28 crc kubenswrapper[4751]: I0130 21:21:28.913621 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn" Jan 30 21:21:28 crc kubenswrapper[4751]: I0130 21:21:28.916806 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Jan 30 21:21:28 crc kubenswrapper[4751]: I0130 21:21:28.916919 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-5ttch" Jan 30 21:21:28 crc kubenswrapper[4751]: I0130 21:21:28.921286 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn"] Jan 30 21:21:29 crc kubenswrapper[4751]: I0130 21:21:29.095304 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/c9c572af-7f6f-4be7-b19e-7adaff281d9d-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-2w6rn\" (UID: \"c9c572af-7f6f-4be7-b19e-7adaff281d9d\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn" Jan 30 21:21:29 crc kubenswrapper[4751]: I0130 21:21:29.124441 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg" event={"ID":"b354db71-ccf5-4280-86a3-faf88514fb9d","Type":"ContainerStarted","Data":"34dfb55120c98144a517444ef52397b872626df176854d7005a33e6525ee98ad"} Jan 30 21:21:29 crc kubenswrapper[4751]: I0130 21:21:29.196528 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/c9c572af-7f6f-4be7-b19e-7adaff281d9d-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-2w6rn\" (UID: \"c9c572af-7f6f-4be7-b19e-7adaff281d9d\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn" Jan 30 21:21:29 crc kubenswrapper[4751]: I0130 21:21:29.212091 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/c9c572af-7f6f-4be7-b19e-7adaff281d9d-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-2w6rn\" (UID: \"c9c572af-7f6f-4be7-b19e-7adaff281d9d\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn" Jan 30 21:21:29 crc kubenswrapper[4751]: I0130 21:21:29.230130 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn" Jan 30 21:21:29 crc kubenswrapper[4751]: I0130 21:21:29.687601 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg" podStartSLOduration=3.181566757 podStartE2EDuration="5.687579044s" podCreationTimestamp="2026-01-30 21:21:24 +0000 UTC" firstStartedPulling="2026-01-30 21:21:25.734056942 +0000 UTC m=+424.479879631" lastFinishedPulling="2026-01-30 21:21:28.240069269 +0000 UTC m=+426.985891918" observedRunningTime="2026-01-30 21:21:29.153206019 +0000 UTC m=+427.899028678" watchObservedRunningTime="2026-01-30 21:21:29.687579044 +0000 UTC m=+428.433401713" Jan 30 21:21:29 crc kubenswrapper[4751]: I0130 21:21:29.689879 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn"] Jan 30 21:21:30 crc kubenswrapper[4751]: I0130 21:21:30.131363 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn" event={"ID":"c9c572af-7f6f-4be7-b19e-7adaff281d9d","Type":"ContainerStarted","Data":"28c0fd8c034a9d8717f4e0c385c1f937434c304e48b8a698ad2927f5f5b754bc"} Jan 30 21:21:32 crc kubenswrapper[4751]: I0130 21:21:32.146452 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn" event={"ID":"c9c572af-7f6f-4be7-b19e-7adaff281d9d","Type":"ContainerStarted","Data":"ae4d18660aee0924d88aac7436bd40570778571b35b57d7edc4a5d979543ceb9"} Jan 30 21:21:32 crc kubenswrapper[4751]: I0130 21:21:32.146900 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn" Jan 30 21:21:32 crc kubenswrapper[4751]: I0130 21:21:32.154951 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn" Jan 30 21:21:32 crc kubenswrapper[4751]: I0130 21:21:32.168523 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn" podStartSLOduration=2.623716995 podStartE2EDuration="4.168507675s" podCreationTimestamp="2026-01-30 21:21:28 +0000 UTC" firstStartedPulling="2026-01-30 21:21:29.703224056 +0000 UTC m=+428.449046715" lastFinishedPulling="2026-01-30 21:21:31.248014746 +0000 UTC m=+429.993837395" observedRunningTime="2026-01-30 21:21:32.166206633 +0000 UTC m=+430.912029322" watchObservedRunningTime="2026-01-30 21:21:32.168507675 +0000 UTC m=+430.914330324" Jan 30 21:21:32 crc kubenswrapper[4751]: I0130 21:21:32.979089 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-nfkcx"] Jan 30 21:21:32 crc kubenswrapper[4751]: I0130 21:21:32.980654 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" Jan 30 21:21:32 crc kubenswrapper[4751]: I0130 21:21:32.983622 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-5qnnt" Jan 30 21:21:32 crc kubenswrapper[4751]: I0130 21:21:32.983672 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Jan 30 21:21:32 crc kubenswrapper[4751]: I0130 21:21:32.984567 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Jan 30 21:21:32 crc kubenswrapper[4751]: I0130 21:21:32.984600 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.002653 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-nfkcx"] Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.049091 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/68479b95-39ca-4900-af4e-ee0c7d98998c-metrics-client-ca\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.049412 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/68479b95-39ca-4900-af4e-ee0c7d98998c-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.049532 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b4qp\" (UniqueName: \"kubernetes.io/projected/68479b95-39ca-4900-af4e-ee0c7d98998c-kube-api-access-2b4qp\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.049616 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/68479b95-39ca-4900-af4e-ee0c7d98998c-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.156016 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/68479b95-39ca-4900-af4e-ee0c7d98998c-metrics-client-ca\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.156074 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/68479b95-39ca-4900-af4e-ee0c7d98998c-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.156096 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b4qp\" (UniqueName: \"kubernetes.io/projected/68479b95-39ca-4900-af4e-ee0c7d98998c-kube-api-access-2b4qp\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.156122 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/68479b95-39ca-4900-af4e-ee0c7d98998c-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" Jan 30 21:21:33 crc kubenswrapper[4751]: E0130 21:21:33.156262 4751 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Jan 30 21:21:33 crc kubenswrapper[4751]: E0130 21:21:33.156311 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68479b95-39ca-4900-af4e-ee0c7d98998c-prometheus-operator-tls podName:68479b95-39ca-4900-af4e-ee0c7d98998c nodeName:}" failed. No retries permitted until 2026-01-30 21:21:33.656294738 +0000 UTC m=+432.402117387 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/68479b95-39ca-4900-af4e-ee0c7d98998c-prometheus-operator-tls") pod "prometheus-operator-db54df47d-nfkcx" (UID: "68479b95-39ca-4900-af4e-ee0c7d98998c") : secret "prometheus-operator-tls" not found Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.157350 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/68479b95-39ca-4900-af4e-ee0c7d98998c-metrics-client-ca\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.175767 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/68479b95-39ca-4900-af4e-ee0c7d98998c-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.181457 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b4qp\" (UniqueName: \"kubernetes.io/projected/68479b95-39ca-4900-af4e-ee0c7d98998c-kube-api-access-2b4qp\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.661065 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/68479b95-39ca-4900-af4e-ee0c7d98998c-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.670420 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/68479b95-39ca-4900-af4e-ee0c7d98998c-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.913792 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" Jan 30 21:21:34 crc kubenswrapper[4751]: I0130 21:21:34.379418 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-nfkcx"] Jan 30 21:21:34 crc kubenswrapper[4751]: W0130 21:21:34.389199 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68479b95_39ca_4900_af4e_ee0c7d98998c.slice/crio-6a4e80040e5a9add0247d290704f0018c72e787090cea37c52a67e1f3a56c028 WatchSource:0}: Error finding container 6a4e80040e5a9add0247d290704f0018c72e787090cea37c52a67e1f3a56c028: Status 404 returned error can't find the container with id 6a4e80040e5a9add0247d290704f0018c72e787090cea37c52a67e1f3a56c028 Jan 30 21:21:35 crc kubenswrapper[4751]: I0130 21:21:35.182157 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" event={"ID":"68479b95-39ca-4900-af4e-ee0c7d98998c","Type":"ContainerStarted","Data":"6a4e80040e5a9add0247d290704f0018c72e787090cea37c52a67e1f3a56c028"} Jan 30 21:21:36 crc kubenswrapper[4751]: I0130 21:21:36.189020 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" event={"ID":"68479b95-39ca-4900-af4e-ee0c7d98998c","Type":"ContainerStarted","Data":"576994ff244113730f86bf9e0877acb2855268db8dede46691fcb92d880c6d76"} Jan 30 21:21:36 crc kubenswrapper[4751]: I0130 21:21:36.189317 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" event={"ID":"68479b95-39ca-4900-af4e-ee0c7d98998c","Type":"ContainerStarted","Data":"966198049e04166800964daf2657a5b3d91bac4c8f9e0762697b11d7c89f6531"} Jan 30 21:21:36 crc kubenswrapper[4751]: I0130 21:21:36.216136 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" podStartSLOduration=2.907232624 podStartE2EDuration="4.216109266s" podCreationTimestamp="2026-01-30 21:21:32 +0000 UTC" firstStartedPulling="2026-01-30 21:21:34.392840878 +0000 UTC m=+433.138663537" lastFinishedPulling="2026-01-30 21:21:35.70171753 +0000 UTC m=+434.447540179" observedRunningTime="2026-01-30 21:21:36.211044509 +0000 UTC m=+434.956867198" watchObservedRunningTime="2026-01-30 21:21:36.216109266 +0000 UTC m=+434.961931955" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.358477 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp"] Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.359801 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.362066 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.362353 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.362517 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-px4hv" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.363494 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.378892 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp"] Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.389400 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-59fm8"] Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.390820 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.393202 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c"] Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.394443 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-ch2nh" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.394637 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.394663 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.397591 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.403924 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-jxfsm" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.403952 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.403925 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.409149 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c"] Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440254 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6b887fa7-4c67-4c26-86cb-e4d18c024c03-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440724 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440776 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/506a5b6f-daef-43b6-a780-a6c727c076fe-metrics-client-ca\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440821 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/506a5b6f-daef-43b6-a780-a6c727c076fe-root\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440840 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftwg5\" (UniqueName: \"kubernetes.io/projected/6b887fa7-4c67-4c26-86cb-e4d18c024c03-kube-api-access-ftwg5\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440854 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/506a5b6f-daef-43b6-a780-a6c727c076fe-sys\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 
21:21:38.440871 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440887 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d3306c1e-22bb-4266-8ede-1a4acb3e3152-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440910 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/d3306c1e-22bb-4266-8ede-1a4acb3e3152-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440928 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvf24\" (UniqueName: \"kubernetes.io/projected/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-api-access-lvf24\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440948 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440971 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-wtmp\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440994 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-textfile\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.441012 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-tls\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.441035 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6b887fa7-4c67-4c26-86cb-e4d18c024c03-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.441058 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.441085 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9qg8\" (UniqueName: \"kubernetes.io/projected/506a5b6f-daef-43b6-a780-a6c727c076fe-kube-api-access-k9qg8\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.441101 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6b887fa7-4c67-4c26-86cb-e4d18c024c03-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.542199 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9qg8\" (UniqueName: \"kubernetes.io/projected/506a5b6f-daef-43b6-a780-a6c727c076fe-kube-api-access-k9qg8\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.542253 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6b887fa7-4c67-4c26-86cb-e4d18c024c03-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.542278 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6b887fa7-4c67-4c26-86cb-e4d18c024c03-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.542302 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.543224 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.544152 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6b887fa7-4c67-4c26-86cb-e4d18c024c03-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.544910 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/506a5b6f-daef-43b6-a780-a6c727c076fe-metrics-client-ca\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.544987 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/506a5b6f-daef-43b6-a780-a6c727c076fe-root\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545012 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/506a5b6f-daef-43b6-a780-a6c727c076fe-sys\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545038 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftwg5\" (UniqueName: \"kubernetes.io/projected/6b887fa7-4c67-4c26-86cb-e4d18c024c03-kube-api-access-ftwg5\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545098 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545122 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d3306c1e-22bb-4266-8ede-1a4acb3e3152-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545165 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/d3306c1e-22bb-4266-8ede-1a4acb3e3152-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: 
\"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545189 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvf24\" (UniqueName: \"kubernetes.io/projected/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-api-access-lvf24\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545233 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545276 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-wtmp\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545316 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-textfile\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545365 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-tls\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545415 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6b887fa7-4c67-4c26-86cb-e4d18c024c03-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545451 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545490 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/506a5b6f-daef-43b6-a780-a6c727c076fe-metrics-client-ca\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545571 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"sys\" (UniqueName: \"kubernetes.io/host-path/506a5b6f-daef-43b6-a780-a6c727c076fe-sys\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.546767 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d3306c1e-22bb-4266-8ede-1a4acb3e3152-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.548907 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6b887fa7-4c67-4c26-86cb-e4d18c024c03-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.549591 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.549802 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-textfile\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.549879 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-wtmp\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.550114 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/d3306c1e-22bb-4266-8ede-1a4acb3e3152-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.551417 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6b887fa7-4c67-4c26-86cb-e4d18c024c03-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.553102 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " 
pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.553143 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/506a5b6f-daef-43b6-a780-a6c727c076fe-root\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.553566 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-tls\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.561794 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.261605 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftwg5\" (UniqueName: \"kubernetes.io/projected/6b887fa7-4c67-4c26-86cb-e4d18c024c03-kube-api-access-ftwg5\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.263578 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9qg8\" (UniqueName: \"kubernetes.io/projected/506a5b6f-daef-43b6-a780-a6c727c076fe-kube-api-access-k9qg8\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.278706 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvf24\" (UniqueName: \"kubernetes.io/projected/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-api-access-lvf24\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.317995 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.355169 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:39 crc kubenswrapper[4751]: W0130 21:21:39.386424 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod506a5b6f_daef_43b6_a780_a6c727c076fe.slice/crio-362e282743df3ca3aeda5b20c0bdb9051743509b0a7179a1b4db5e9d8ca89b1b WatchSource:0}: Error finding container 362e282743df3ca3aeda5b20c0bdb9051743509b0a7179a1b4db5e9d8ca89b1b: Status 404 returned error can't find the container with id 362e282743df3ca3aeda5b20c0bdb9051743509b0a7179a1b4db5e9d8ca89b1b Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.574061 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.640751 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.642527 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.645682 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.645869 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.645999 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.646115 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.647678 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.647747 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.647787 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-g4pzr" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.647977 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.653107 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.667103 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.762613 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.762662 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-config-volume\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.762683 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-web-config\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 
21:21:39.762706 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6e08b15b-9f9e-4437-9222-25bb2f84216e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.762735 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e08b15b-9f9e-4437-9222-25bb2f84216e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.762764 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpp54\" (UniqueName: \"kubernetes.io/projected/6e08b15b-9f9e-4437-9222-25bb2f84216e-kube-api-access-kpp54\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.762782 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.762849 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6e08b15b-9f9e-4437-9222-25bb2f84216e-config-out\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.762896 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6e08b15b-9f9e-4437-9222-25bb2f84216e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.762914 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.762942 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6e08b15b-9f9e-4437-9222-25bb2f84216e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.763023 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.798548 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c"] Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864339 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6e08b15b-9f9e-4437-9222-25bb2f84216e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864604 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864659 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6e08b15b-9f9e-4437-9222-25bb2f84216e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864691 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864738 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864759 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-config-volume\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864778 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-web-config\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864820 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6e08b15b-9f9e-4437-9222-25bb2f84216e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") 
" pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864862 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e08b15b-9f9e-4437-9222-25bb2f84216e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864892 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpp54\" (UniqueName: \"kubernetes.io/projected/6e08b15b-9f9e-4437-9222-25bb2f84216e-kube-api-access-kpp54\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864912 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864934 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6e08b15b-9f9e-4437-9222-25bb2f84216e-config-out\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.868409 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6e08b15b-9f9e-4437-9222-25bb2f84216e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.868515 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6e08b15b-9f9e-4437-9222-25bb2f84216e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.868692 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e08b15b-9f9e-4437-9222-25bb2f84216e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.871813 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6e08b15b-9f9e-4437-9222-25bb2f84216e-config-out\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.872161 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-web-config\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 
21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.872405 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.872842 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-config-volume\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.873075 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.873085 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.874020 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6e08b15b-9f9e-4437-9222-25bb2f84216e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.874130 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.882513 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpp54\" (UniqueName: \"kubernetes.io/projected/6e08b15b-9f9e-4437-9222-25bb2f84216e-kube-api-access-kpp54\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.958316 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.992475 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp"] Jan 30 21:21:40 crc kubenswrapper[4751]: I0130 21:21:40.240767 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" event={"ID":"d3306c1e-22bb-4266-8ede-1a4acb3e3152","Type":"ContainerStarted","Data":"748cb8efb35419cf9fccc358a1e0878120a9068199369cf09dc8737aae54117c"} Jan 30 21:21:40 crc kubenswrapper[4751]: I0130 21:21:40.243218 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" event={"ID":"6b887fa7-4c67-4c26-86cb-e4d18c024c03","Type":"ContainerStarted","Data":"4d2a318baf3b452b71e9e20d5b75d98bd306f018832de1625220c87aaf9ff3f4"} Jan 30 21:21:40 crc kubenswrapper[4751]: I0130 21:21:40.243258 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" event={"ID":"6b887fa7-4c67-4c26-86cb-e4d18c024c03","Type":"ContainerStarted","Data":"f579750255028ba6024bd98fe7032f40547cb01911e21e5c163dc8b3207dae4e"} Jan 30 21:21:40 crc kubenswrapper[4751]: I0130 21:21:40.243268 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" event={"ID":"6b887fa7-4c67-4c26-86cb-e4d18c024c03","Type":"ContainerStarted","Data":"d1e15a310a4d8212a71260df00f416466c10c6717211dad84b3b0c4690e614bb"} Jan 30 21:21:40 crc kubenswrapper[4751]: I0130 21:21:40.244358 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-59fm8" event={"ID":"506a5b6f-daef-43b6-a780-a6c727c076fe","Type":"ContainerStarted","Data":"362e282743df3ca3aeda5b20c0bdb9051743509b0a7179a1b4db5e9d8ca89b1b"} Jan 30 21:21:40 crc kubenswrapper[4751]: I0130 21:21:40.403050 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 30 21:21:40 crc kubenswrapper[4751]: W0130 21:21:40.570695 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e08b15b_9f9e_4437_9222_25bb2f84216e.slice/crio-ca7de6e4c0a58160cce1757d31080c82689c0a4574cd7c713a50436982d4bc6d WatchSource:0}: Error finding container ca7de6e4c0a58160cce1757d31080c82689c0a4574cd7c713a50436982d4bc6d: Status 404 returned error can't find the container with id ca7de6e4c0a58160cce1757d31080c82689c0a4574cd7c713a50436982d4bc6d Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.250868 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6e08b15b-9f9e-4437-9222-25bb2f84216e","Type":"ContainerStarted","Data":"ca7de6e4c0a58160cce1757d31080c82689c0a4574cd7c713a50436982d4bc6d"} Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.252795 4751 generic.go:334] "Generic (PLEG): container finished" podID="506a5b6f-daef-43b6-a780-a6c727c076fe" containerID="6ac0be97ad672f4e7248d8ca23810c59cbf2b6b0cfe1c3383a76acf4abf73010" exitCode=0 Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.252848 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-59fm8" event={"ID":"506a5b6f-daef-43b6-a780-a6c727c076fe","Type":"ContainerDied","Data":"6ac0be97ad672f4e7248d8ca23810c59cbf2b6b0cfe1c3383a76acf4abf73010"} Jan 30 21:21:41 crc kubenswrapper[4751]: 
I0130 21:21:41.544398 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn"] Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.547016 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.548429 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.548797 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.549088 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.549514 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-4s5l3hluq0o23" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.549682 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.549874 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.553575 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-4hwb6" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.566980 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn"] Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.587944 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.587985 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.588002 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/66e2e180-25ad-48cf-90fa-cb472fc3f248-metrics-client-ca\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.588021 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-tls\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: 
\"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.588066 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqlmd\" (UniqueName: \"kubernetes.io/projected/66e2e180-25ad-48cf-90fa-cb472fc3f248-kube-api-access-wqlmd\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.588086 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-grpc-tls\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.588265 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.588338 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.690039 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqlmd\" (UniqueName: \"kubernetes.io/projected/66e2e180-25ad-48cf-90fa-cb472fc3f248-kube-api-access-wqlmd\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.690089 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-grpc-tls\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.690129 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.690150 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy-rules\") pod 
\"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.690211 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.690227 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.690243 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/66e2e180-25ad-48cf-90fa-cb472fc3f248-metrics-client-ca\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.690264 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-tls\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.692012 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/66e2e180-25ad-48cf-90fa-cb472fc3f248-metrics-client-ca\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.699920 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.700762 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-grpc-tls\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.701750 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " 
pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.705889 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.707932 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.708446 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-tls\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.717771 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqlmd\" (UniqueName: \"kubernetes.io/projected/66e2e180-25ad-48cf-90fa-cb472fc3f248-kube-api-access-wqlmd\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.880579 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:42 crc kubenswrapper[4751]: I0130 21:21:42.738258 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn"] Jan 30 21:21:42 crc kubenswrapper[4751]: W0130 21:21:42.746631 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66e2e180_25ad_48cf_90fa_cb472fc3f248.slice/crio-6909ac05d8e920bea233140d5eb05ffe48b9a1d8d31299e484cd62a85af8c186 WatchSource:0}: Error finding container 6909ac05d8e920bea233140d5eb05ffe48b9a1d8d31299e484cd62a85af8c186: Status 404 returned error can't find the container with id 6909ac05d8e920bea233140d5eb05ffe48b9a1d8d31299e484cd62a85af8c186 Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.188925 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d76f88947-6xcwf"] Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.189771 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.199314 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d76f88947-6xcwf"] Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.267381 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-59fm8" event={"ID":"506a5b6f-daef-43b6-a780-a6c727c076fe","Type":"ContainerStarted","Data":"76629e07c62a68371af62ff81941845c90f628dd02ee3184c02d4b8cb2ff0b1b"} Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.267432 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-59fm8" event={"ID":"506a5b6f-daef-43b6-a780-a6c727c076fe","Type":"ContainerStarted","Data":"aa4225b5d34351673317cd1ff7134de28cfcc3f9bc81a97ec40244955f13b417"} Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.269913 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" event={"ID":"d3306c1e-22bb-4266-8ede-1a4acb3e3152","Type":"ContainerStarted","Data":"f1c1e070c93ae770acf66c9f6da146f75a04f8c8776810b568aa8979945a5785"} Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.269969 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" event={"ID":"d3306c1e-22bb-4266-8ede-1a4acb3e3152","Type":"ContainerStarted","Data":"39a44bd5757e3088990fdf8fbc71ca13e4fefc329fa942a20c5e39689f78b960"} Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.269993 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" event={"ID":"d3306c1e-22bb-4266-8ede-1a4acb3e3152","Type":"ContainerStarted","Data":"65a0710b2c734b91de3bd04ff1bb61972ea696a8f11ef2f25d3b56b4d5b75e9d"} Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.271420 4751 generic.go:334] "Generic (PLEG): container finished" podID="6e08b15b-9f9e-4437-9222-25bb2f84216e" containerID="72074fff18d8b31892f38fedacd663120f7dc5e1c6d79135055a34214329a2fc" exitCode=0 Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.271470 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6e08b15b-9f9e-4437-9222-25bb2f84216e","Type":"ContainerDied","Data":"72074fff18d8b31892f38fedacd663120f7dc5e1c6d79135055a34214329a2fc"} Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.273606 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" event={"ID":"6b887fa7-4c67-4c26-86cb-e4d18c024c03","Type":"ContainerStarted","Data":"92f219ed510797cff85970e4cff400d2e89ec5f6cd001553e062bb517a6cc2b5"} Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.274557 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" event={"ID":"66e2e180-25ad-48cf-90fa-cb472fc3f248","Type":"ContainerStarted","Data":"6909ac05d8e920bea233140d5eb05ffe48b9a1d8d31299e484cd62a85af8c186"} Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.292372 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-59fm8" podStartSLOduration=4.076673917 podStartE2EDuration="5.292352374s" podCreationTimestamp="2026-01-30 21:21:38 +0000 UTC" firstStartedPulling="2026-01-30 21:21:39.393184619 +0000 UTC m=+438.139007268" lastFinishedPulling="2026-01-30 21:21:40.608863076 +0000 UTC 
m=+439.354685725" observedRunningTime="2026-01-30 21:21:43.288279424 +0000 UTC m=+442.034102093" watchObservedRunningTime="2026-01-30 21:21:43.292352374 +0000 UTC m=+442.038175033" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.308989 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm6ks\" (UniqueName: \"kubernetes.io/projected/ac1ab634-ceee-441a-8c73-eee8464c68f6-kube-api-access-gm6ks\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.309035 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-serving-cert\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.309143 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-oauth-config\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.309174 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-service-ca\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.309256 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-trusted-ca-bundle\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.309300 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-config\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.309347 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-oauth-serving-cert\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.339427 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" podStartSLOduration=3.113118848 podStartE2EDuration="5.339409614s" podCreationTimestamp="2026-01-30 21:21:38 +0000 UTC" firstStartedPulling="2026-01-30 21:21:40.073868414 +0000 UTC m=+438.819691063" lastFinishedPulling="2026-01-30 21:21:42.30015918 +0000 UTC 
m=+441.045981829" observedRunningTime="2026-01-30 21:21:43.337610696 +0000 UTC m=+442.083433335" watchObservedRunningTime="2026-01-30 21:21:43.339409614 +0000 UTC m=+442.085232263" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.359003 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" podStartSLOduration=3.047977109 podStartE2EDuration="5.358988293s" podCreationTimestamp="2026-01-30 21:21:38 +0000 UTC" firstStartedPulling="2026-01-30 21:21:39.988349205 +0000 UTC m=+438.734171864" lastFinishedPulling="2026-01-30 21:21:42.299360389 +0000 UTC m=+441.045183048" observedRunningTime="2026-01-30 21:21:43.356130095 +0000 UTC m=+442.101952744" watchObservedRunningTime="2026-01-30 21:21:43.358988293 +0000 UTC m=+442.104810942" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.410599 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-oauth-config\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.410638 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-service-ca\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.410658 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-trusted-ca-bundle\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.410705 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-config\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.410731 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-oauth-serving-cert\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.410798 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm6ks\" (UniqueName: \"kubernetes.io/projected/ac1ab634-ceee-441a-8c73-eee8464c68f6-kube-api-access-gm6ks\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.410819 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-serving-cert\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " 
pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.411909 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-oauth-serving-cert\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.412448 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-service-ca\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.412476 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-config\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.413138 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-trusted-ca-bundle\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.416130 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-oauth-config\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.416498 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-serving-cert\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.428081 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm6ks\" (UniqueName: \"kubernetes.io/projected/ac1ab634-ceee-441a-8c73-eee8464c68f6-kube-api-access-gm6ks\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.506943 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.906544 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d76f88947-6xcwf"] Jan 30 21:21:43 crc kubenswrapper[4751]: W0130 21:21:43.915677 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac1ab634_ceee_441a_8c73_eee8464c68f6.slice/crio-90dda4e7bb2586d629d12de8b6b6da54f391bd7370318df6020c6ebd1a54b36f WatchSource:0}: Error finding container 90dda4e7bb2586d629d12de8b6b6da54f391bd7370318df6020c6ebd1a54b36f: Status 404 returned error can't find the container with id 90dda4e7bb2586d629d12de8b6b6da54f391bd7370318df6020c6ebd1a54b36f Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.146058 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z"] Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.146813 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.148752 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.149578 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.150051 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z"] Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.222182 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/591ae6a6-bb7e-4805-a1bd-e45b7624468d-monitoring-plugin-cert\") pod \"monitoring-plugin-5d87cc6655-97t9z\" (UID: \"591ae6a6-bb7e-4805-a1bd-e45b7624468d\") " pod="openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.280361 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d76f88947-6xcwf" event={"ID":"ac1ab634-ceee-441a-8c73-eee8464c68f6","Type":"ContainerStarted","Data":"952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d"} Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.280406 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d76f88947-6xcwf" event={"ID":"ac1ab634-ceee-441a-8c73-eee8464c68f6","Type":"ContainerStarted","Data":"90dda4e7bb2586d629d12de8b6b6da54f391bd7370318df6020c6ebd1a54b36f"} Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.297600 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d76f88947-6xcwf" podStartSLOduration=1.297584189 podStartE2EDuration="1.297584189s" podCreationTimestamp="2026-01-30 21:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:21:44.297039415 +0000 UTC m=+443.042862064" watchObservedRunningTime="2026-01-30 21:21:44.297584189 +0000 UTC m=+443.043406838" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.323215 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/591ae6a6-bb7e-4805-a1bd-e45b7624468d-monitoring-plugin-cert\") pod \"monitoring-plugin-5d87cc6655-97t9z\" (UID: \"591ae6a6-bb7e-4805-a1bd-e45b7624468d\") " pod="openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.330935 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/591ae6a6-bb7e-4805-a1bd-e45b7624468d-monitoring-plugin-cert\") pod \"monitoring-plugin-5d87cc6655-97t9z\" (UID: \"591ae6a6-bb7e-4805-a1bd-e45b7624468d\") " pod="openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.405196 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-7d5557bc66-sc8vg"] Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.406034 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.408749 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-6lnl8" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.408768 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.408938 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.409075 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.409156 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-1phlphhk6hasm" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.409370 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.420923 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7d5557bc66-sc8vg"] Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.466806 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.525574 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11068e1-9b5d-488b-bd21-9986af1e86f6-client-ca-bundle\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.525644 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/f11068e1-9b5d-488b-bd21-9986af1e86f6-metrics-server-audit-profiles\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.525679 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f11068e1-9b5d-488b-bd21-9986af1e86f6-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.525723 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/f11068e1-9b5d-488b-bd21-9986af1e86f6-secret-metrics-client-certs\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.525768 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/f11068e1-9b5d-488b-bd21-9986af1e86f6-secret-metrics-server-tls\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.525800 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/f11068e1-9b5d-488b-bd21-9986af1e86f6-audit-log\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.525818 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxmwl\" (UniqueName: \"kubernetes.io/projected/f11068e1-9b5d-488b-bd21-9986af1e86f6-kube-api-access-nxmwl\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.635203 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11068e1-9b5d-488b-bd21-9986af1e86f6-client-ca-bundle\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " 
pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.635259 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/f11068e1-9b5d-488b-bd21-9986af1e86f6-metrics-server-audit-profiles\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.635287 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f11068e1-9b5d-488b-bd21-9986af1e86f6-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.635344 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/f11068e1-9b5d-488b-bd21-9986af1e86f6-secret-metrics-client-certs\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.635386 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/f11068e1-9b5d-488b-bd21-9986af1e86f6-secret-metrics-server-tls\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.635410 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/f11068e1-9b5d-488b-bd21-9986af1e86f6-audit-log\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.635424 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxmwl\" (UniqueName: \"kubernetes.io/projected/f11068e1-9b5d-488b-bd21-9986af1e86f6-kube-api-access-nxmwl\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.636184 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/f11068e1-9b5d-488b-bd21-9986af1e86f6-audit-log\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.636824 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/f11068e1-9b5d-488b-bd21-9986af1e86f6-metrics-server-audit-profiles\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.637353 4751 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f11068e1-9b5d-488b-bd21-9986af1e86f6-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.638621 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/f11068e1-9b5d-488b-bd21-9986af1e86f6-secret-metrics-client-certs\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.640563 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/f11068e1-9b5d-488b-bd21-9986af1e86f6-secret-metrics-server-tls\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.646566 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11068e1-9b5d-488b-bd21-9986af1e86f6-client-ca-bundle\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.657794 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxmwl\" (UniqueName: \"kubernetes.io/projected/f11068e1-9b5d-488b-bd21-9986af1e86f6-kube-api-access-nxmwl\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.723815 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.766902 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.768762 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.772194 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.772604 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-46tbn" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.776432 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.777751 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.778417 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.779124 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-1snvg1vdm8jdq" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.779374 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.779804 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.780089 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.780759 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.781557 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.803637 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.813914 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.816914 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.837730 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.837786 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/795f9139-b7e5-4bb2-86f6-e2046f4190de-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.837815 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.837833 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-web-config\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.837851 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.837942 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.837982 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/795f9139-b7e5-4bb2-86f6-e2046f4190de-config-out\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838014 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/795f9139-b7e5-4bb2-86f6-e2046f4190de-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838042 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838088 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvk55\" (UniqueName: \"kubernetes.io/projected/795f9139-b7e5-4bb2-86f6-e2046f4190de-kube-api-access-wvk55\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838138 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " 
pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838155 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838175 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838200 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838260 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838288 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838302 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838316 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-config\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.939803 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.939848 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-out\" (UniqueName: \"kubernetes.io/empty-dir/795f9139-b7e5-4bb2-86f6-e2046f4190de-config-out\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.939872 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/795f9139-b7e5-4bb2-86f6-e2046f4190de-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.939893 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.939917 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvk55\" (UniqueName: \"kubernetes.io/projected/795f9139-b7e5-4bb2-86f6-e2046f4190de-kube-api-access-wvk55\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.939940 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.939955 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.939971 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.939987 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.940012 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.940039 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.940057 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-config\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.940077 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.940109 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.940139 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/795f9139-b7e5-4bb2-86f6-e2046f4190de-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.940161 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.940180 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-web-config\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.940196 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.942081 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.942179 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" 
(UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.942579 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/795f9139-b7e5-4bb2-86f6-e2046f4190de-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.942848 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.944284 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/795f9139-b7e5-4bb2-86f6-e2046f4190de-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.945206 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.946043 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-config\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.946760 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.947001 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.947159 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/795f9139-b7e5-4bb2-86f6-e2046f4190de-config-out\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.947247 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-kube-rbac-proxy\") pod 
\"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.948436 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.948926 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-web-config\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.951274 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.953386 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.953959 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.955121 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.957796 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvk55\" (UniqueName: \"kubernetes.io/projected/795f9139-b7e5-4bb2-86f6-e2046f4190de-kube-api-access-wvk55\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:45 crc kubenswrapper[4751]: I0130 21:21:45.088104 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.193539 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 30 21:21:46 crc kubenswrapper[4751]: W0130 21:21:46.200188 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod795f9139_b7e5_4bb2_86f6_e2046f4190de.slice/crio-3a9feadcfdb7de473d7d65ef5606a62c2f5976b66d34d50e4e9cd02ee72182bb WatchSource:0}: Error finding container 3a9feadcfdb7de473d7d65ef5606a62c2f5976b66d34d50e4e9cd02ee72182bb: Status 404 returned error can't find the container with id 3a9feadcfdb7de473d7d65ef5606a62c2f5976b66d34d50e4e9cd02ee72182bb Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.243078 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7d5557bc66-sc8vg"] Jan 30 21:21:46 crc kubenswrapper[4751]: W0130 21:21:46.250393 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf11068e1_9b5d_488b_bd21_9986af1e86f6.slice/crio-0ba3919dba51339e9c8a906da887181b7ab3180a18bdfa768d6e8f0ffe73c57a WatchSource:0}: Error finding container 0ba3919dba51339e9c8a906da887181b7ab3180a18bdfa768d6e8f0ffe73c57a: Status 404 returned error can't find the container with id 0ba3919dba51339e9c8a906da887181b7ab3180a18bdfa768d6e8f0ffe73c57a Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.295751 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6e08b15b-9f9e-4437-9222-25bb2f84216e","Type":"ContainerStarted","Data":"180bef4c38217333ae8586722a0fcdec017cb64127db32c846c26a5378a6d0f1"} Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.295790 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6e08b15b-9f9e-4437-9222-25bb2f84216e","Type":"ContainerStarted","Data":"e441d7db8b5f38a6897a6ae27ef65077645c8bd49534b38cc007124485b5be2a"} Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.295804 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6e08b15b-9f9e-4437-9222-25bb2f84216e","Type":"ContainerStarted","Data":"6fb1f42417530492a2a64c2c063c82343873633b79375e6efb2f79588873d11f"} Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.295819 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6e08b15b-9f9e-4437-9222-25bb2f84216e","Type":"ContainerStarted","Data":"1cc3daad45de2425a6be02cd8d9b19e2199fc2a230be0c3913d05ad7c63d5e81"} Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.296197 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z"] Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.297786 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" event={"ID":"f11068e1-9b5d-488b-bd21-9986af1e86f6","Type":"ContainerStarted","Data":"0ba3919dba51339e9c8a906da887181b7ab3180a18bdfa768d6e8f0ffe73c57a"} Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.299548 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"795f9139-b7e5-4bb2-86f6-e2046f4190de","Type":"ContainerStarted","Data":"3a9feadcfdb7de473d7d65ef5606a62c2f5976b66d34d50e4e9cd02ee72182bb"} Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.301960 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" event={"ID":"66e2e180-25ad-48cf-90fa-cb472fc3f248","Type":"ContainerStarted","Data":"196d44f4e2f8510aef54e50978e5d7722ea622790a479570378733710ad55723"} Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.301997 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" event={"ID":"66e2e180-25ad-48cf-90fa-cb472fc3f248","Type":"ContainerStarted","Data":"2c2c080f1f4f8f31b4ed026423b9b25e1a578af77fce4d43f23d7c3cc452fa75"} Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.302007 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" event={"ID":"66e2e180-25ad-48cf-90fa-cb472fc3f248","Type":"ContainerStarted","Data":"8eb2e570c1ccde5624b6cb4a6d7a8e89930358be9245da9f6645eeeb16734890"} Jan 30 21:21:46 crc kubenswrapper[4751]: W0130 21:21:46.303070 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod591ae6a6_bb7e_4805_a1bd_e45b7624468d.slice/crio-8ecd6f1b695761cda3f8accb34c0882b3cd5f31a2c1125dea1ef07238e7e814f WatchSource:0}: Error finding container 8ecd6f1b695761cda3f8accb34c0882b3cd5f31a2c1125dea1ef07238e7e814f: Status 404 returned error can't find the container with id 8ecd6f1b695761cda3f8accb34c0882b3cd5f31a2c1125dea1ef07238e7e814f Jan 30 21:21:47 crc kubenswrapper[4751]: I0130 21:21:47.316385 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6e08b15b-9f9e-4437-9222-25bb2f84216e","Type":"ContainerStarted","Data":"4fbb3000b655d5b37336e698b6373c6d4773408006a1af25172bf719751eee1e"} Jan 30 21:21:47 crc kubenswrapper[4751]: I0130 21:21:47.318061 4751 generic.go:334] "Generic (PLEG): container finished" podID="795f9139-b7e5-4bb2-86f6-e2046f4190de" containerID="8f796d49626603846702268c2aa0059b309f2abdbd6cee803d2aa2e5e920a30a" exitCode=0 Jan 30 21:21:47 crc kubenswrapper[4751]: I0130 21:21:47.318173 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"795f9139-b7e5-4bb2-86f6-e2046f4190de","Type":"ContainerDied","Data":"8f796d49626603846702268c2aa0059b309f2abdbd6cee803d2aa2e5e920a30a"} Jan 30 21:21:47 crc kubenswrapper[4751]: I0130 21:21:47.319262 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z" event={"ID":"591ae6a6-bb7e-4805-a1bd-e45b7624468d","Type":"ContainerStarted","Data":"8ecd6f1b695761cda3f8accb34c0882b3cd5f31a2c1125dea1ef07238e7e814f"} Jan 30 21:21:48 crc kubenswrapper[4751]: I0130 21:21:48.333486 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6e08b15b-9f9e-4437-9222-25bb2f84216e","Type":"ContainerStarted","Data":"ff5a651f0e5c8521da8e6ffd4d2c5a92bf955e3e3bbcbd6bd99d107df1cefdf0"} Jan 30 21:21:48 crc kubenswrapper[4751]: I0130 21:21:48.336769 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" event={"ID":"66e2e180-25ad-48cf-90fa-cb472fc3f248","Type":"ContainerStarted","Data":"ed635854263a020d72ca07b93ee7f68107cacfbe9bfab787abcab4886f1996c3"} Jan 30 
21:21:48 crc kubenswrapper[4751]: I0130 21:21:48.338159 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z" event={"ID":"591ae6a6-bb7e-4805-a1bd-e45b7624468d","Type":"ContainerStarted","Data":"6a36cc5818bfeabcf94b1f30154e13a928417b4f94e4c7ca837328dedd0b1e8c"} Jan 30 21:21:48 crc kubenswrapper[4751]: I0130 21:21:48.338542 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z" Jan 30 21:21:48 crc kubenswrapper[4751]: I0130 21:21:48.348649 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z" Jan 30 21:21:48 crc kubenswrapper[4751]: I0130 21:21:48.365845 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.28656763 podStartE2EDuration="9.365823569s" podCreationTimestamp="2026-01-30 21:21:39 +0000 UTC" firstStartedPulling="2026-01-30 21:21:40.573584443 +0000 UTC m=+439.319407092" lastFinishedPulling="2026-01-30 21:21:47.652840382 +0000 UTC m=+446.398663031" observedRunningTime="2026-01-30 21:21:48.359663982 +0000 UTC m=+447.105486711" watchObservedRunningTime="2026-01-30 21:21:48.365823569 +0000 UTC m=+447.111646208" Jan 30 21:21:48 crc kubenswrapper[4751]: I0130 21:21:48.375207 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z" podStartSLOduration=3.021759787 podStartE2EDuration="4.375175391s" podCreationTimestamp="2026-01-30 21:21:44 +0000 UTC" firstStartedPulling="2026-01-30 21:21:46.307009453 +0000 UTC m=+445.052832102" lastFinishedPulling="2026-01-30 21:21:47.660425057 +0000 UTC m=+446.406247706" observedRunningTime="2026-01-30 21:21:48.374234786 +0000 UTC m=+447.120057435" watchObservedRunningTime="2026-01-30 21:21:48.375175391 +0000 UTC m=+447.120998110" Jan 30 21:21:49 crc kubenswrapper[4751]: I0130 21:21:49.346503 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" event={"ID":"66e2e180-25ad-48cf-90fa-cb472fc3f248","Type":"ContainerStarted","Data":"b128cc5fa29e6669cbe27455b4929c8b1acafc8353a651fd0202279a561de062"} Jan 30 21:21:49 crc kubenswrapper[4751]: I0130 21:21:49.346979 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" event={"ID":"66e2e180-25ad-48cf-90fa-cb472fc3f248","Type":"ContainerStarted","Data":"be5dfbb37f3ca1ab2aedc56792a72ff2e4690790ced3f926fa12a4d595d40556"} Jan 30 21:21:49 crc kubenswrapper[4751]: I0130 21:21:49.347012 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:49 crc kubenswrapper[4751]: I0130 21:21:49.349463 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" event={"ID":"f11068e1-9b5d-488b-bd21-9986af1e86f6","Type":"ContainerStarted","Data":"ee12ac60f913b32e50a983263db96c6c8025db7518e26f6a69a324b683eb4324"} Jan 30 21:21:49 crc kubenswrapper[4751]: I0130 21:21:49.370042 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" podStartSLOduration=3.468668578 podStartE2EDuration="8.370023096s" podCreationTimestamp="2026-01-30 21:21:41 +0000 UTC" firstStartedPulling="2026-01-30 21:21:42.749377217 +0000 UTC m=+441.495199866" 
lastFinishedPulling="2026-01-30 21:21:47.650731735 +0000 UTC m=+446.396554384" observedRunningTime="2026-01-30 21:21:49.367871437 +0000 UTC m=+448.113694086" watchObservedRunningTime="2026-01-30 21:21:49.370023096 +0000 UTC m=+448.115845745" Jan 30 21:21:49 crc kubenswrapper[4751]: I0130 21:21:49.389843 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" podStartSLOduration=3.237525051 podStartE2EDuration="5.38982074s" podCreationTimestamp="2026-01-30 21:21:44 +0000 UTC" firstStartedPulling="2026-01-30 21:21:46.254707961 +0000 UTC m=+445.000530610" lastFinishedPulling="2026-01-30 21:21:48.40700361 +0000 UTC m=+447.152826299" observedRunningTime="2026-01-30 21:21:49.385983887 +0000 UTC m=+448.131806536" watchObservedRunningTime="2026-01-30 21:21:49.38982074 +0000 UTC m=+448.135643389" Jan 30 21:21:51 crc kubenswrapper[4751]: I0130 21:21:51.939876 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:52 crc kubenswrapper[4751]: I0130 21:21:52.384007 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"795f9139-b7e5-4bb2-86f6-e2046f4190de","Type":"ContainerStarted","Data":"f44561733f676d255afcdff20f3e736384d858863e1bd25f17a3727702a1c2f7"} Jan 30 21:21:52 crc kubenswrapper[4751]: I0130 21:21:52.384406 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"795f9139-b7e5-4bb2-86f6-e2046f4190de","Type":"ContainerStarted","Data":"6f44f7c5500f180c1a3ef9cdfcb4189ced77c0866d2a57151290b053cf09e851"} Jan 30 21:21:53 crc kubenswrapper[4751]: I0130 21:21:53.396647 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"795f9139-b7e5-4bb2-86f6-e2046f4190de","Type":"ContainerStarted","Data":"60e5abe4626a6289065c1fdf88f00f1b7f086d6ac6e4b7e1327999d42c0c46e4"} Jan 30 21:21:53 crc kubenswrapper[4751]: I0130 21:21:53.396709 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"795f9139-b7e5-4bb2-86f6-e2046f4190de","Type":"ContainerStarted","Data":"dacfdf568d1f441938a1074af2232864d76b6ece275c85e0f38626a50b743af5"} Jan 30 21:21:53 crc kubenswrapper[4751]: I0130 21:21:53.396729 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"795f9139-b7e5-4bb2-86f6-e2046f4190de","Type":"ContainerStarted","Data":"b053be7a7a81ddb7ec1f6470714c02702596726fa448e3f653466119710716dd"} Jan 30 21:21:53 crc kubenswrapper[4751]: I0130 21:21:53.396747 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"795f9139-b7e5-4bb2-86f6-e2046f4190de","Type":"ContainerStarted","Data":"619b5a55d051aca854bcffbea21dde6d50663e29ea3e824a57e94d697ea35c9e"} Jan 30 21:21:53 crc kubenswrapper[4751]: I0130 21:21:53.463852 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=5.429709476 podStartE2EDuration="9.463829474s" podCreationTimestamp="2026-01-30 21:21:44 +0000 UTC" firstStartedPulling="2026-01-30 21:21:47.320050118 +0000 UTC m=+446.065872767" lastFinishedPulling="2026-01-30 21:21:51.354170076 +0000 UTC m=+450.099992765" observedRunningTime="2026-01-30 21:21:53.443558247 +0000 UTC m=+452.189380946" watchObservedRunningTime="2026-01-30 21:21:53.463829474 +0000 
UTC m=+452.209652133" Jan 30 21:21:53 crc kubenswrapper[4751]: I0130 21:21:53.507894 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:53 crc kubenswrapper[4751]: I0130 21:21:53.511080 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:53 crc kubenswrapper[4751]: I0130 21:21:53.516231 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:54 crc kubenswrapper[4751]: I0130 21:21:54.407139 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:54 crc kubenswrapper[4751]: I0130 21:21:54.489734 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-7bw65"] Jan 30 21:21:55 crc kubenswrapper[4751]: I0130 21:21:55.088584 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:22:04 crc kubenswrapper[4751]: I0130 21:22:04.725124 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:22:04 crc kubenswrapper[4751]: I0130 21:22:04.725992 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:22:19 crc kubenswrapper[4751]: I0130 21:22:19.554227 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-7bw65" podUID="07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" containerName="console" containerID="cri-o://843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68" gracePeriod=15 Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.091160 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-7bw65_07ac6020-7a19-4fd7-9daa-a7db1e3cd5df/console/0.log" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.091943 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.185520 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-config\") pod \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.185608 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-oauth-serving-cert\") pod \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.185647 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-oauth-config\") pod \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.185676 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrwzw\" (UniqueName: \"kubernetes.io/projected/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-kube-api-access-rrwzw\") pod \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.185767 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-serving-cert\") pod \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.185795 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-trusted-ca-bundle\") pod \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.185847 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-service-ca\") pod \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.186754 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-service-ca" (OuterVolumeSpecName: "service-ca") pod "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" (UID: "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.186926 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" (UID: "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.187299 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-config" (OuterVolumeSpecName: "console-config") pod "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" (UID: "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.187488 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" (UID: "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.198979 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" (UID: "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.201033 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-kube-api-access-rrwzw" (OuterVolumeSpecName: "kube-api-access-rrwzw") pod "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" (UID: "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df"). InnerVolumeSpecName "kube-api-access-rrwzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.201093 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" (UID: "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.287684 4751 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.287787 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.287810 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.287829 4751 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.287849 4751 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.287868 4751 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.287887 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrwzw\" (UniqueName: \"kubernetes.io/projected/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-kube-api-access-rrwzw\") on node \"crc\" DevicePath \"\"" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.625246 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-7bw65_07ac6020-7a19-4fd7-9daa-a7db1e3cd5df/console/0.log" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.625374 4751 generic.go:334] "Generic (PLEG): container finished" podID="07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" containerID="843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68" exitCode=2 Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.625455 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7bw65" event={"ID":"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df","Type":"ContainerDied","Data":"843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68"} Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.625526 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.625561 4751 scope.go:117] "RemoveContainer" containerID="843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.625537 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7bw65" event={"ID":"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df","Type":"ContainerDied","Data":"d63914e011c114b25558640a8b61cb4256ca45025b1be36724b2e0af5265302e"} Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.659957 4751 scope.go:117] "RemoveContainer" containerID="843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68" Jan 30 21:22:20 crc kubenswrapper[4751]: E0130 21:22:20.661646 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68\": container with ID starting with 843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68 not found: ID does not exist" containerID="843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.661747 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68"} err="failed to get container status \"843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68\": rpc error: code = NotFound desc = could not find container \"843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68\": container with ID starting with 843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68 not found: ID does not exist" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.676647 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-7bw65"] Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.681285 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-7bw65"] Jan 30 21:22:21 crc kubenswrapper[4751]: I0130 21:22:21.991416 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" path="/var/lib/kubelet/pods/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df/volumes" Jan 30 21:22:24 crc kubenswrapper[4751]: I0130 21:22:24.734461 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:22:24 crc kubenswrapper[4751]: I0130 21:22:24.740868 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:22:45 crc kubenswrapper[4751]: I0130 21:22:45.088819 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:22:45 crc kubenswrapper[4751]: I0130 21:22:45.152829 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:22:45 crc kubenswrapper[4751]: I0130 21:22:45.881591 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:22:54 crc kubenswrapper[4751]: I0130 21:22:54.127087 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:22:54 crc kubenswrapper[4751]: I0130 21:22:54.128007 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:23:04 crc kubenswrapper[4751]: I0130 21:23:04.957541 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-66d88878c9-plgvh"] Jan 30 21:23:04 crc kubenswrapper[4751]: E0130 21:23:04.959308 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" containerName="console" Jan 30 21:23:04 crc kubenswrapper[4751]: I0130 21:23:04.959429 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" containerName="console" Jan 30 21:23:04 crc kubenswrapper[4751]: I0130 21:23:04.959615 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" containerName="console" Jan 30 21:23:04 crc kubenswrapper[4751]: I0130 21:23:04.960159 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:04 crc kubenswrapper[4751]: I0130 21:23:04.971205 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66d88878c9-plgvh"] Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.026842 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-service-ca\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.026899 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-oauth-serving-cert\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.026945 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-trusted-ca-bundle\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.027066 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-oauth-config\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.027100 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkqrp\" (UniqueName: 
\"kubernetes.io/projected/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-kube-api-access-pkqrp\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.027274 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-serving-cert\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.027350 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-config\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.128805 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-oauth-serving-cert\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.128862 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-trusted-ca-bundle\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.128916 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-oauth-config\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.128947 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkqrp\" (UniqueName: \"kubernetes.io/projected/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-kube-api-access-pkqrp\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.128994 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-serving-cert\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.129018 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-config\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.129053 4751 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-service-ca\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.129726 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-oauth-serving-cert\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.129876 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-service-ca\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.130067 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-trusted-ca-bundle\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.130664 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-config\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.146048 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-oauth-config\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.146064 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-serving-cert\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.159540 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkqrp\" (UniqueName: \"kubernetes.io/projected/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-kube-api-access-pkqrp\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.290933 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.593944 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66d88878c9-plgvh"] Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.989896 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66d88878c9-plgvh" event={"ID":"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c","Type":"ContainerStarted","Data":"b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18"} Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.990266 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66d88878c9-plgvh" event={"ID":"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c","Type":"ContainerStarted","Data":"0e693c5eb441ca00dae4f66390c3ffc4f2ac93c82973ea2489bbd8ae4743393e"} Jan 30 21:23:06 crc kubenswrapper[4751]: I0130 21:23:06.027471 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-66d88878c9-plgvh" podStartSLOduration=2.02743935 podStartE2EDuration="2.02743935s" podCreationTimestamp="2026-01-30 21:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:23:06.019037074 +0000 UTC m=+524.764859763" watchObservedRunningTime="2026-01-30 21:23:06.02743935 +0000 UTC m=+524.773262039" Jan 30 21:23:15 crc kubenswrapper[4751]: I0130 21:23:15.291854 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:15 crc kubenswrapper[4751]: I0130 21:23:15.292413 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:15 crc kubenswrapper[4751]: I0130 21:23:15.300026 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:16 crc kubenswrapper[4751]: I0130 21:23:16.070137 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:16 crc kubenswrapper[4751]: I0130 21:23:16.143036 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d76f88947-6xcwf"] Jan 30 21:23:22 crc kubenswrapper[4751]: I0130 21:23:22.247473 4751 scope.go:117] "RemoveContainer" containerID="660c0699f36cdfbc8888077f14b9b8efed6cc41a8b3dc7ca02dfbf3a83512f36" Jan 30 21:23:24 crc kubenswrapper[4751]: I0130 21:23:24.127188 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:23:24 crc kubenswrapper[4751]: I0130 21:23:24.127648 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.191900 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5d76f88947-6xcwf" podUID="ac1ab634-ceee-441a-8c73-eee8464c68f6" containerName="console" 
containerID="cri-o://952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d" gracePeriod=15 Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.626048 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d76f88947-6xcwf_ac1ab634-ceee-441a-8c73-eee8464c68f6/console/0.log" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.626397 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.773002 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-serving-cert\") pod \"ac1ab634-ceee-441a-8c73-eee8464c68f6\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.773142 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-service-ca\") pod \"ac1ab634-ceee-441a-8c73-eee8464c68f6\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.773186 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-oauth-serving-cert\") pod \"ac1ab634-ceee-441a-8c73-eee8464c68f6\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.773275 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm6ks\" (UniqueName: \"kubernetes.io/projected/ac1ab634-ceee-441a-8c73-eee8464c68f6-kube-api-access-gm6ks\") pod \"ac1ab634-ceee-441a-8c73-eee8464c68f6\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.773355 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-oauth-config\") pod \"ac1ab634-ceee-441a-8c73-eee8464c68f6\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.773385 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-config\") pod \"ac1ab634-ceee-441a-8c73-eee8464c68f6\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.773417 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-trusted-ca-bundle\") pod \"ac1ab634-ceee-441a-8c73-eee8464c68f6\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.774664 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ac1ab634-ceee-441a-8c73-eee8464c68f6" (UID: "ac1ab634-ceee-441a-8c73-eee8464c68f6"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.774702 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-config" (OuterVolumeSpecName: "console-config") pod "ac1ab634-ceee-441a-8c73-eee8464c68f6" (UID: "ac1ab634-ceee-441a-8c73-eee8464c68f6"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.774722 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-service-ca" (OuterVolumeSpecName: "service-ca") pod "ac1ab634-ceee-441a-8c73-eee8464c68f6" (UID: "ac1ab634-ceee-441a-8c73-eee8464c68f6"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.774872 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ac1ab634-ceee-441a-8c73-eee8464c68f6" (UID: "ac1ab634-ceee-441a-8c73-eee8464c68f6"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.780142 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ac1ab634-ceee-441a-8c73-eee8464c68f6" (UID: "ac1ab634-ceee-441a-8c73-eee8464c68f6"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.780452 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac1ab634-ceee-441a-8c73-eee8464c68f6-kube-api-access-gm6ks" (OuterVolumeSpecName: "kube-api-access-gm6ks") pod "ac1ab634-ceee-441a-8c73-eee8464c68f6" (UID: "ac1ab634-ceee-441a-8c73-eee8464c68f6"). InnerVolumeSpecName "kube-api-access-gm6ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.787347 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ac1ab634-ceee-441a-8c73-eee8464c68f6" (UID: "ac1ab634-ceee-441a-8c73-eee8464c68f6"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.875411 4751 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.875935 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.875968 4751 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.875990 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm6ks\" (UniqueName: \"kubernetes.io/projected/ac1ab634-ceee-441a-8c73-eee8464c68f6-kube-api-access-gm6ks\") on node \"crc\" DevicePath \"\"" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.876011 4751 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.876029 4751 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.876047 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:23:42 crc kubenswrapper[4751]: I0130 21:23:42.293519 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d76f88947-6xcwf_ac1ab634-ceee-441a-8c73-eee8464c68f6/console/0.log" Jan 30 21:23:42 crc kubenswrapper[4751]: I0130 21:23:42.293977 4751 generic.go:334] "Generic (PLEG): container finished" podID="ac1ab634-ceee-441a-8c73-eee8464c68f6" containerID="952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d" exitCode=2 Jan 30 21:23:42 crc kubenswrapper[4751]: I0130 21:23:42.294030 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d76f88947-6xcwf" event={"ID":"ac1ab634-ceee-441a-8c73-eee8464c68f6","Type":"ContainerDied","Data":"952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d"} Jan 30 21:23:42 crc kubenswrapper[4751]: I0130 21:23:42.294081 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d76f88947-6xcwf" event={"ID":"ac1ab634-ceee-441a-8c73-eee8464c68f6","Type":"ContainerDied","Data":"90dda4e7bb2586d629d12de8b6b6da54f391bd7370318df6020c6ebd1a54b36f"} Jan 30 21:23:42 crc kubenswrapper[4751]: I0130 21:23:42.294081 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:23:42 crc kubenswrapper[4751]: I0130 21:23:42.294109 4751 scope.go:117] "RemoveContainer" containerID="952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d" Jan 30 21:23:42 crc kubenswrapper[4751]: I0130 21:23:42.324523 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d76f88947-6xcwf"] Jan 30 21:23:42 crc kubenswrapper[4751]: I0130 21:23:42.326352 4751 scope.go:117] "RemoveContainer" containerID="952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d" Jan 30 21:23:42 crc kubenswrapper[4751]: E0130 21:23:42.327059 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d\": container with ID starting with 952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d not found: ID does not exist" containerID="952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d" Jan 30 21:23:42 crc kubenswrapper[4751]: I0130 21:23:42.327114 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d"} err="failed to get container status \"952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d\": rpc error: code = NotFound desc = could not find container \"952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d\": container with ID starting with 952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d not found: ID does not exist" Jan 30 21:23:42 crc kubenswrapper[4751]: I0130 21:23:42.332888 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5d76f88947-6xcwf"] Jan 30 21:23:43 crc kubenswrapper[4751]: I0130 21:23:43.992917 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac1ab634-ceee-441a-8c73-eee8464c68f6" path="/var/lib/kubelet/pods/ac1ab634-ceee-441a-8c73-eee8464c68f6/volumes" Jan 30 21:23:54 crc kubenswrapper[4751]: I0130 21:23:54.127116 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:23:54 crc kubenswrapper[4751]: I0130 21:23:54.128533 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:23:54 crc kubenswrapper[4751]: I0130 21:23:54.128625 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:23:54 crc kubenswrapper[4751]: I0130 21:23:54.129470 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a1b797d24a7a7f0cfe28e0e7b1326aa242a6fa28ef5d30064b33f02362b2f1a6"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:23:54 crc kubenswrapper[4751]: I0130 21:23:54.129601 4751 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://a1b797d24a7a7f0cfe28e0e7b1326aa242a6fa28ef5d30064b33f02362b2f1a6" gracePeriod=600 Jan 30 21:23:54 crc kubenswrapper[4751]: I0130 21:23:54.402140 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="a1b797d24a7a7f0cfe28e0e7b1326aa242a6fa28ef5d30064b33f02362b2f1a6" exitCode=0 Jan 30 21:23:54 crc kubenswrapper[4751]: I0130 21:23:54.402217 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"a1b797d24a7a7f0cfe28e0e7b1326aa242a6fa28ef5d30064b33f02362b2f1a6"} Jan 30 21:23:54 crc kubenswrapper[4751]: I0130 21:23:54.402599 4751 scope.go:117] "RemoveContainer" containerID="daa3657d48b883db14b4975f24f93c0b2c6f7eb8738d3c0267f1f4f003ba63aa" Jan 30 21:23:55 crc kubenswrapper[4751]: I0130 21:23:55.411168 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"a610754d75a118a60637a1e554575fc5a5a243d54c20205f4fedf2c00e804266"} Jan 30 21:24:22 crc kubenswrapper[4751]: I0130 21:24:22.296833 4751 scope.go:117] "RemoveContainer" containerID="e0c09ac548892d16ce214d98d72512bd48e15448460ce8ae35e4043474ce58cc" Jan 30 21:24:22 crc kubenswrapper[4751]: I0130 21:24:22.329076 4751 scope.go:117] "RemoveContainer" containerID="10d79502f57ca29d080e9753142598555bcee310b2933e4570a1f0619498f923" Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.301886 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"] Jan 30 21:25:50 crc kubenswrapper[4751]: E0130 21:25:50.302877 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac1ab634-ceee-441a-8c73-eee8464c68f6" containerName="console" Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.302898 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac1ab634-ceee-441a-8c73-eee8464c68f6" containerName="console" Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.303127 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac1ab634-ceee-441a-8c73-eee8464c68f6" containerName="console" Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.306132 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.325526 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.344290 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"] Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.418197 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.418782 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf4w2\" (UniqueName: \"kubernetes.io/projected/dedbc66c-13e3-4312-85e6-00d215e5f2ff-kube-api-access-bf4w2\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.418865 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.520053 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.520122 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf4w2\" (UniqueName: \"kubernetes.io/projected/dedbc66c-13e3-4312-85e6-00d215e5f2ff-kube-api-access-bf4w2\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.520158 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.520679 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.520681 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.542783 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf4w2\" (UniqueName: \"kubernetes.io/projected/dedbc66c-13e3-4312-85e6-00d215e5f2ff-kube-api-access-bf4w2\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.641759 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" Jan 30 21:25:51 crc kubenswrapper[4751]: I0130 21:25:51.112914 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"] Jan 30 21:25:51 crc kubenswrapper[4751]: I0130 21:25:51.338887 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" event={"ID":"dedbc66c-13e3-4312-85e6-00d215e5f2ff","Type":"ContainerStarted","Data":"c37392c9c28591d30af6fa13864c5cce74c1af8be4cc91616fe120071a372d74"} Jan 30 21:25:51 crc kubenswrapper[4751]: I0130 21:25:51.339243 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" event={"ID":"dedbc66c-13e3-4312-85e6-00d215e5f2ff","Type":"ContainerStarted","Data":"0c06341dc64915b9ede2454cdb277fad81ddae6f920583071ae5766b060ac55d"} Jan 30 21:25:52 crc kubenswrapper[4751]: I0130 21:25:52.345437 4751 generic.go:334] "Generic (PLEG): container finished" podID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" containerID="c37392c9c28591d30af6fa13864c5cce74c1af8be4cc91616fe120071a372d74" exitCode=0 Jan 30 21:25:52 crc kubenswrapper[4751]: I0130 21:25:52.346369 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" event={"ID":"dedbc66c-13e3-4312-85e6-00d215e5f2ff","Type":"ContainerDied","Data":"c37392c9c28591d30af6fa13864c5cce74c1af8be4cc91616fe120071a372d74"} Jan 30 21:25:52 crc kubenswrapper[4751]: I0130 21:25:52.349082 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 21:25:54 crc kubenswrapper[4751]: I0130 21:25:54.126835 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:25:54 crc kubenswrapper[4751]: I0130 
21:25:54.127520 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:25:54 crc kubenswrapper[4751]: I0130 21:25:54.362632 4751 generic.go:334] "Generic (PLEG): container finished" podID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" containerID="8d83fa7db634ce7a7858c27562ccdf062d9dea0a838bb5aacc88523290613dfc" exitCode=0 Jan 30 21:25:54 crc kubenswrapper[4751]: I0130 21:25:54.362704 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" event={"ID":"dedbc66c-13e3-4312-85e6-00d215e5f2ff","Type":"ContainerDied","Data":"8d83fa7db634ce7a7858c27562ccdf062d9dea0a838bb5aacc88523290613dfc"} Jan 30 21:25:55 crc kubenswrapper[4751]: I0130 21:25:55.373736 4751 generic.go:334] "Generic (PLEG): container finished" podID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" containerID="8fca4ce58dcc1f6c42dc0ef9782db856f25df74c010a55261aa5d6ba4308f0b1" exitCode=0 Jan 30 21:25:55 crc kubenswrapper[4751]: I0130 21:25:55.373788 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" event={"ID":"dedbc66c-13e3-4312-85e6-00d215e5f2ff","Type":"ContainerDied","Data":"8fca4ce58dcc1f6c42dc0ef9782db856f25df74c010a55261aa5d6ba4308f0b1"} Jan 30 21:25:56 crc kubenswrapper[4751]: I0130 21:25:56.644399 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" Jan 30 21:25:56 crc kubenswrapper[4751]: I0130 21:25:56.719232 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-util\") pod \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") " Jan 30 21:25:56 crc kubenswrapper[4751]: I0130 21:25:56.719437 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf4w2\" (UniqueName: \"kubernetes.io/projected/dedbc66c-13e3-4312-85e6-00d215e5f2ff-kube-api-access-bf4w2\") pod \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") " Jan 30 21:25:56 crc kubenswrapper[4751]: I0130 21:25:56.720643 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-bundle\") pod \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") " Jan 30 21:25:56 crc kubenswrapper[4751]: I0130 21:25:56.724424 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-bundle" (OuterVolumeSpecName: "bundle") pod "dedbc66c-13e3-4312-85e6-00d215e5f2ff" (UID: "dedbc66c-13e3-4312-85e6-00d215e5f2ff"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:25:56 crc kubenswrapper[4751]: I0130 21:25:56.726496 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dedbc66c-13e3-4312-85e6-00d215e5f2ff-kube-api-access-bf4w2" (OuterVolumeSpecName: "kube-api-access-bf4w2") pod "dedbc66c-13e3-4312-85e6-00d215e5f2ff" (UID: "dedbc66c-13e3-4312-85e6-00d215e5f2ff"). InnerVolumeSpecName "kube-api-access-bf4w2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:25:56 crc kubenswrapper[4751]: I0130 21:25:56.737266 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-util" (OuterVolumeSpecName: "util") pod "dedbc66c-13e3-4312-85e6-00d215e5f2ff" (UID: "dedbc66c-13e3-4312-85e6-00d215e5f2ff"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:25:56 crc kubenswrapper[4751]: I0130 21:25:56.823256 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:25:56 crc kubenswrapper[4751]: I0130 21:25:56.823320 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:25:56 crc kubenswrapper[4751]: I0130 21:25:56.823381 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf4w2\" (UniqueName: \"kubernetes.io/projected/dedbc66c-13e3-4312-85e6-00d215e5f2ff-kube-api-access-bf4w2\") on node \"crc\" DevicePath \"\"" Jan 30 21:25:57 crc kubenswrapper[4751]: I0130 21:25:57.389387 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" event={"ID":"dedbc66c-13e3-4312-85e6-00d215e5f2ff","Type":"ContainerDied","Data":"0c06341dc64915b9ede2454cdb277fad81ddae6f920583071ae5766b060ac55d"} Jan 30 21:25:57 crc kubenswrapper[4751]: I0130 21:25:57.389445 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c06341dc64915b9ede2454cdb277fad81ddae6f920583071ae5766b060ac55d" Jan 30 21:25:57 crc kubenswrapper[4751]: I0130 21:25:57.389476 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.639275 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n"] Jan 30 21:26:04 crc kubenswrapper[4751]: E0130 21:26:04.640116 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" containerName="extract" Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.640131 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" containerName="extract" Jan 30 21:26:04 crc kubenswrapper[4751]: E0130 21:26:04.640146 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" containerName="util" Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.640153 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" containerName="util" Jan 30 21:26:04 crc kubenswrapper[4751]: E0130 21:26:04.640164 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" containerName="pull" Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.640170 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" containerName="pull" Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.640280 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" containerName="extract" Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.640811 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n" Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.643237 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-4wgck" Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.643340 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.643386 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.653142 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n"] Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.743313 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4"] Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.744037 4751 util.go:30] "No sandbox for pod can be found. 
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.746141 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-wrgzj"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.747005 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.752015 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"]
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.752894 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.793234 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4"]
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.820898 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"]
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.832528 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b7dz\" (UniqueName: \"kubernetes.io/projected/96f3e554-fbfc-4716-b6ee-0913394521fa-kube-api-access-8b7dz\") pod \"obo-prometheus-operator-68bc856cb9-5nv4n\" (UID: \"96f3e554-fbfc-4716-b6ee-0913394521fa\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.832574 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c0edc270-3913-41f7-9218-32549d1d3dea-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-n4cw4\" (UID: \"c0edc270-3913-41f7-9218-32549d1d3dea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.832604 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c0edc270-3913-41f7-9218-32549d1d3dea-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-n4cw4\" (UID: \"c0edc270-3913-41f7-9218-32549d1d3dea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.832651 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16999302-ac18-4e1c-b3f7-a2bf3f7605aa-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-vpng2\" (UID: \"16999302-ac18-4e1c-b3f7-a2bf3f7605aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.832688 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/16999302-ac18-4e1c-b3f7-a2bf3f7605aa-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-vpng2\" (UID: \"16999302-ac18-4e1c-b3f7-a2bf3f7605aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.933890 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16999302-ac18-4e1c-b3f7-a2bf3f7605aa-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-vpng2\" (UID: \"16999302-ac18-4e1c-b3f7-a2bf3f7605aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.934015 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/16999302-ac18-4e1c-b3f7-a2bf3f7605aa-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-vpng2\" (UID: \"16999302-ac18-4e1c-b3f7-a2bf3f7605aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.934182 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b7dz\" (UniqueName: \"kubernetes.io/projected/96f3e554-fbfc-4716-b6ee-0913394521fa-kube-api-access-8b7dz\") pod \"obo-prometheus-operator-68bc856cb9-5nv4n\" (UID: \"96f3e554-fbfc-4716-b6ee-0913394521fa\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.934232 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c0edc270-3913-41f7-9218-32549d1d3dea-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-n4cw4\" (UID: \"c0edc270-3913-41f7-9218-32549d1d3dea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.934281 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c0edc270-3913-41f7-9218-32549d1d3dea-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-n4cw4\" (UID: \"c0edc270-3913-41f7-9218-32549d1d3dea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.940553 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c0edc270-3913-41f7-9218-32549d1d3dea-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-n4cw4\" (UID: \"c0edc270-3913-41f7-9218-32549d1d3dea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.941759 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/16999302-ac18-4e1c-b3f7-a2bf3f7605aa-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-vpng2\" (UID: \"16999302-ac18-4e1c-b3f7-a2bf3f7605aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.946526 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-lhkl2"]
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.947369 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-lhkl2"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.950383 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.950398 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-9n6n6"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.950543 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16999302-ac18-4e1c-b3f7-a2bf3f7605aa-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-vpng2\" (UID: \"16999302-ac18-4e1c-b3f7-a2bf3f7605aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.951009 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c0edc270-3913-41f7-9218-32549d1d3dea-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-n4cw4\" (UID: \"c0edc270-3913-41f7-9218-32549d1d3dea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.963401 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-lhkl2"]
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.964889 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b7dz\" (UniqueName: \"kubernetes.io/projected/96f3e554-fbfc-4716-b6ee-0913394521fa-kube-api-access-8b7dz\") pod \"obo-prometheus-operator-68bc856cb9-5nv4n\" (UID: \"96f3e554-fbfc-4716-b6ee-0913394521fa\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.035104 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ee6b659-c8c9-4f07-a897-c69db812f880-observability-operator-tls\") pod \"observability-operator-59bdc8b94-lhkl2\" (UID: \"3ee6b659-c8c9-4f07-a897-c69db812f880\") " pod="openshift-operators/observability-operator-59bdc8b94-lhkl2"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.035619 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gs2v\" (UniqueName: \"kubernetes.io/projected/3ee6b659-c8c9-4f07-a897-c69db812f880-kube-api-access-2gs2v\") pod \"observability-operator-59bdc8b94-lhkl2\" (UID: \"3ee6b659-c8c9-4f07-a897-c69db812f880\") " pod="openshift-operators/observability-operator-59bdc8b94-lhkl2"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.058242 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-l498d"]
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.059183 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-l498d"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.059742 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.062785 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-d86tn"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.079314 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-l498d"]
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.108193 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.136976 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ee6b659-c8c9-4f07-a897-c69db812f880-observability-operator-tls\") pod \"observability-operator-59bdc8b94-lhkl2\" (UID: \"3ee6b659-c8c9-4f07-a897-c69db812f880\") " pod="openshift-operators/observability-operator-59bdc8b94-lhkl2"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.137067 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gs2v\" (UniqueName: \"kubernetes.io/projected/3ee6b659-c8c9-4f07-a897-c69db812f880-kube-api-access-2gs2v\") pod \"observability-operator-59bdc8b94-lhkl2\" (UID: \"3ee6b659-c8c9-4f07-a897-c69db812f880\") " pod="openshift-operators/observability-operator-59bdc8b94-lhkl2"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.145129 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ee6b659-c8c9-4f07-a897-c69db812f880-observability-operator-tls\") pod \"observability-operator-59bdc8b94-lhkl2\" (UID: \"3ee6b659-c8c9-4f07-a897-c69db812f880\") " pod="openshift-operators/observability-operator-59bdc8b94-lhkl2"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.157157 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gs2v\" (UniqueName: \"kubernetes.io/projected/3ee6b659-c8c9-4f07-a897-c69db812f880-kube-api-access-2gs2v\") pod \"observability-operator-59bdc8b94-lhkl2\" (UID: \"3ee6b659-c8c9-4f07-a897-c69db812f880\") " pod="openshift-operators/observability-operator-59bdc8b94-lhkl2"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.238161 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7472790e-3a0e-40dd-909c-4301ba84d884-openshift-service-ca\") pod \"perses-operator-5bf474d74f-l498d\" (UID: \"7472790e-3a0e-40dd-909c-4301ba84d884\") " pod="openshift-operators/perses-operator-5bf474d74f-l498d"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.243097 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7mbs\" (UniqueName: \"kubernetes.io/projected/7472790e-3a0e-40dd-909c-4301ba84d884-kube-api-access-p7mbs\") pod \"perses-operator-5bf474d74f-l498d\" (UID: \"7472790e-3a0e-40dd-909c-4301ba84d884\") " pod="openshift-operators/perses-operator-5bf474d74f-l498d"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.260396 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.305817 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-lhkl2"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.344633 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7mbs\" (UniqueName: \"kubernetes.io/projected/7472790e-3a0e-40dd-909c-4301ba84d884-kube-api-access-p7mbs\") pod \"perses-operator-5bf474d74f-l498d\" (UID: \"7472790e-3a0e-40dd-909c-4301ba84d884\") " pod="openshift-operators/perses-operator-5bf474d74f-l498d"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.344706 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7472790e-3a0e-40dd-909c-4301ba84d884-openshift-service-ca\") pod \"perses-operator-5bf474d74f-l498d\" (UID: \"7472790e-3a0e-40dd-909c-4301ba84d884\") " pod="openshift-operators/perses-operator-5bf474d74f-l498d"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.345877 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7472790e-3a0e-40dd-909c-4301ba84d884-openshift-service-ca\") pod \"perses-operator-5bf474d74f-l498d\" (UID: \"7472790e-3a0e-40dd-909c-4301ba84d884\") " pod="openshift-operators/perses-operator-5bf474d74f-l498d"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.369174 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7mbs\" (UniqueName: \"kubernetes.io/projected/7472790e-3a0e-40dd-909c-4301ba84d884-kube-api-access-p7mbs\") pod \"perses-operator-5bf474d74f-l498d\" (UID: \"7472790e-3a0e-40dd-909c-4301ba84d884\") " pod="openshift-operators/perses-operator-5bf474d74f-l498d"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.376629 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-l498d"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.507862 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n"]
Jan 30 21:26:05 crc kubenswrapper[4751]: W0130 21:26:05.530863 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96f3e554_fbfc_4716_b6ee_0913394521fa.slice/crio-949d902b6d67c4f5a61b1a068286961454aaf38e30921e303fac5263292ec944 WatchSource:0}: Error finding container 949d902b6d67c4f5a61b1a068286961454aaf38e30921e303fac5263292ec944: Status 404 returned error can't find the container with id 949d902b6d67c4f5a61b1a068286961454aaf38e30921e303fac5263292ec944
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.536662 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4"]
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.644165 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"]
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.656214 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-lhkl2"]
Jan 30 21:26:05 crc kubenswrapper[4751]: W0130 21:26:05.666843 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16999302_ac18_4e1c_b3f7_a2bf3f7605aa.slice/crio-83e3f2296f955a9be2b1a67bdbd3576ffca2eba895af0ebcd86ec1f6c1089c9f WatchSource:0}: Error finding container 83e3f2296f955a9be2b1a67bdbd3576ffca2eba895af0ebcd86ec1f6c1089c9f: Status 404 returned error can't find the container with id 83e3f2296f955a9be2b1a67bdbd3576ffca2eba895af0ebcd86ec1f6c1089c9f
Jan 30 21:26:05 crc kubenswrapper[4751]: W0130 21:26:05.669354 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ee6b659_c8c9_4f07_a897_c69db812f880.slice/crio-4496ba71d84aa36823bb2c8f1b3c87033040f0b962ca6ecb0d5c9d87ad7d0ddd WatchSource:0}: Error finding container 4496ba71d84aa36823bb2c8f1b3c87033040f0b962ca6ecb0d5c9d87ad7d0ddd: Status 404 returned error can't find the container with id 4496ba71d84aa36823bb2c8f1b3c87033040f0b962ca6ecb0d5c9d87ad7d0ddd
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.677144 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-l498d"]
Jan 30 21:26:06 crc kubenswrapper[4751]: I0130 21:26:06.468581 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n" event={"ID":"96f3e554-fbfc-4716-b6ee-0913394521fa","Type":"ContainerStarted","Data":"949d902b6d67c4f5a61b1a068286961454aaf38e30921e303fac5263292ec944"}
Jan 30 21:26:06 crc kubenswrapper[4751]: I0130 21:26:06.470138 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2" event={"ID":"16999302-ac18-4e1c-b3f7-a2bf3f7605aa","Type":"ContainerStarted","Data":"83e3f2296f955a9be2b1a67bdbd3576ffca2eba895af0ebcd86ec1f6c1089c9f"}
Jan 30 21:26:06 crc kubenswrapper[4751]: I0130 21:26:06.475308 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4" event={"ID":"c0edc270-3913-41f7-9218-32549d1d3dea","Type":"ContainerStarted","Data":"23c8ac9fcac0d71da8c3a971fe96facf412acf786d010f30524c5287167be801"}
Jan 30 21:26:06 crc kubenswrapper[4751]: I0130 21:26:06.476870 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-l498d" event={"ID":"7472790e-3a0e-40dd-909c-4301ba84d884","Type":"ContainerStarted","Data":"b74f88f83d8c31b942ad99651a96cb612084b98ee59a4f260755b1bdf3ec022e"}
Jan 30 21:26:06 crc kubenswrapper[4751]: I0130 21:26:06.479308 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-lhkl2" event={"ID":"3ee6b659-c8c9-4f07-a897-c69db812f880","Type":"ContainerStarted","Data":"4496ba71d84aa36823bb2c8f1b3c87033040f0b962ca6ecb0d5c9d87ad7d0ddd"}
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.578885 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-lhkl2" event={"ID":"3ee6b659-c8c9-4f07-a897-c69db812f880","Type":"ContainerStarted","Data":"9dff2d8751b0c8cac946f8d6e1f8f36d0b1fc633e1b11b9736ef09f658d4ab62"}
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.580903 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-lhkl2"
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.581017 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n" event={"ID":"96f3e554-fbfc-4716-b6ee-0913394521fa","Type":"ContainerStarted","Data":"0e29c1f50f84f784c4cd4752cb0c71c1cfc9994fe9c44094aefd37bd472a20d5"}
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.583660 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-lhkl2"
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.588990 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2" event={"ID":"16999302-ac18-4e1c-b3f7-a2bf3f7605aa","Type":"ContainerStarted","Data":"ce9bf48055f9b949d19c3c2307f0647e5f4b63ed4152fdccb2220c35d3f63b84"}
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.592797 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4" event={"ID":"c0edc270-3913-41f7-9218-32549d1d3dea","Type":"ContainerStarted","Data":"a6efbbe954785342a4095a4b1c0533f1ca1cfb4f82c1a19d7986a93d67697626"}
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.594754 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-l498d" event={"ID":"7472790e-3a0e-40dd-909c-4301ba84d884","Type":"ContainerStarted","Data":"269c5af3098a454b4b3ce5dc2887ae4029e396e561c29fce7c140be2274b7fdf"}
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.594934 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-l498d"
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.602411 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-lhkl2" podStartSLOduration=2.500794385 podStartE2EDuration="12.602310916s" podCreationTimestamp="2026-01-30 21:26:04 +0000 UTC" firstStartedPulling="2026-01-30 21:26:05.673306372 +0000 UTC m=+704.419129021" lastFinishedPulling="2026-01-30 21:26:15.774822893 +0000 UTC m=+714.520645552" observedRunningTime="2026-01-30 21:26:16.598749716 +0000 UTC m=+715.344572375" watchObservedRunningTime="2026-01-30 21:26:16.602310916 +0000 UTC m=+715.348133575"
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.620292 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2" podStartSLOduration=2.573603574 podStartE2EDuration="12.620273235s" podCreationTimestamp="2026-01-30 21:26:04 +0000 UTC" firstStartedPulling="2026-01-30 21:26:05.67398971 +0000 UTC m=+704.419812359" lastFinishedPulling="2026-01-30 21:26:15.720659371 +0000 UTC m=+714.466482020" observedRunningTime="2026-01-30 21:26:16.617584227 +0000 UTC m=+715.363406916" watchObservedRunningTime="2026-01-30 21:26:16.620273235 +0000 UTC m=+715.366095884"
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.652375 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n" podStartSLOduration=2.44893171 podStartE2EDuration="12.652349814s" podCreationTimestamp="2026-01-30 21:26:04 +0000 UTC" firstStartedPulling="2026-01-30 21:26:05.535941475 +0000 UTC m=+704.281764124" lastFinishedPulling="2026-01-30 21:26:15.739359559 +0000 UTC m=+714.485182228" observedRunningTime="2026-01-30 21:26:16.643468028 +0000 UTC m=+715.389290707" watchObservedRunningTime="2026-01-30 21:26:16.652349814 +0000 UTC m=+715.398172503"
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.664475 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-l498d" podStartSLOduration=1.589495919 podStartE2EDuration="11.664454393s" podCreationTimestamp="2026-01-30 21:26:05 +0000 UTC" firstStartedPulling="2026-01-30 21:26:05.690030907 +0000 UTC m=+704.435853556" lastFinishedPulling="2026-01-30 21:26:15.764989371 +0000 UTC m=+714.510812030" observedRunningTime="2026-01-30 21:26:16.659371623 +0000 UTC m=+715.405194272" watchObservedRunningTime="2026-01-30 21:26:16.664454393 +0000 UTC m=+715.410277062"
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.681553 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4" podStartSLOduration=2.508800931 podStartE2EDuration="12.681527518s" podCreationTimestamp="2026-01-30 21:26:04 +0000 UTC" firstStartedPulling="2026-01-30 21:26:05.551046497 +0000 UTC m=+704.296869146" lastFinishedPulling="2026-01-30 21:26:15.723773094 +0000 UTC m=+714.469595733" observedRunningTime="2026-01-30 21:26:16.679390745 +0000 UTC m=+715.425213404" watchObservedRunningTime="2026-01-30 21:26:16.681527518 +0000 UTC m=+715.427350197"
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.313928 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n8bjd"]
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.314816 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovn-controller" containerID="cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2" gracePeriod=30
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.314860 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="nbdb" containerID="cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568" gracePeriod=30
podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="nbdb" containerID="cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568" gracePeriod=30 Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.314924 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovn-acl-logging" containerID="cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06" gracePeriod=30 Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.314919 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37" gracePeriod=30 Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.314955 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="kube-rbac-proxy-node" containerID="cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2" gracePeriod=30 Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.314967 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="northd" containerID="cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753" gracePeriod=30 Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.315092 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="sbdb" containerID="cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4" gracePeriod=30 Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.351950 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller" containerID="cri-o://54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982" gracePeriod=30 Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.632539 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5sgk2_bcecdc4b-6607-4e4e-a9b5-49b85c030d21/kube-multus/2.log" Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.633514 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5sgk2_bcecdc4b-6607-4e4e-a9b5-49b85c030d21/kube-multus/1.log" Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.633557 4751 generic.go:334] "Generic (PLEG): container finished" podID="bcecdc4b-6607-4e4e-a9b5-49b85c030d21" containerID="83b2f589d316b2b21ef50ee0174ac43309d977d8244dba740216ca2dd67db344" exitCode=2 Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.633598 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5sgk2" event={"ID":"bcecdc4b-6607-4e4e-a9b5-49b85c030d21","Type":"ContainerDied","Data":"83b2f589d316b2b21ef50ee0174ac43309d977d8244dba740216ca2dd67db344"} Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.633662 4751 scope.go:117] "RemoveContainer" containerID="2c6ea3db26de86b678d2306adc7f90c1d03797d9dd14847d766d709276053d02" Jan 30 
21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.634108 4751 scope.go:117] "RemoveContainer" containerID="83b2f589d316b2b21ef50ee0174ac43309d977d8244dba740216ca2dd67db344" Jan 30 21:26:21 crc kubenswrapper[4751]: E0130 21:26:21.634511 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-5sgk2_openshift-multus(bcecdc4b-6607-4e4e-a9b5-49b85c030d21)\"" pod="openshift-multus/multus-5sgk2" podUID="bcecdc4b-6607-4e4e-a9b5-49b85c030d21" Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.635817 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/3.log" Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.637513 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovn-acl-logging/0.log" Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.638005 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovn-controller/0.log" Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.638394 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982" exitCode=0 Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.638419 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4" exitCode=0 Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.638431 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568" exitCode=0 Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.638440 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06" exitCode=143 Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.638450 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2" exitCode=143 Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.638470 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982"} Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.638497 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4"} Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.638512 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568"} Jan 30 21:26:21 crc kubenswrapper[4751]: 
I0130 21:26:21.638523 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06"} Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.638535 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2"} Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.656753 4751 scope.go:117] "RemoveContainer" containerID="959e2d34bf4d2470d1737891bbe3d8704e887259d95ea026e0467f531587bd29" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.537888 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovn-acl-logging/0.log" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.538369 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovn-controller/0.log" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.538877 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.588973 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vnh74"] Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589220 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovn-controller" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589235 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovn-controller" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589246 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="northd" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589252 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="northd" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589268 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589274 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589281 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589287 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589297 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589304 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" 
containerName="kube-rbac-proxy-ovn-metrics" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589313 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="nbdb" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589335 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="nbdb" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589349 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589356 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589363 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589368 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589376 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="kube-rbac-proxy-node" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589381 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="kube-rbac-proxy-node" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589388 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovn-acl-logging" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589395 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovn-acl-logging" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589407 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="kubecfg-setup" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589413 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="kubecfg-setup" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589420 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="sbdb" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589426 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="sbdb" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589520 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589534 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovn-acl-logging" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589540 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589546 4751 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="nbdb" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589554 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="northd" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589562 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="sbdb" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589571 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589578 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589585 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovn-controller" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589593 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589603 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="kube-rbac-proxy-node" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589712 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589719 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589819 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.591607 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.643904 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5sgk2_bcecdc4b-6607-4e4e-a9b5-49b85c030d21/kube-multus/2.log" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.646558 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovn-acl-logging/0.log" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.646972 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovn-controller/0.log" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.647255 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753" exitCode=0 Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.647281 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37" exitCode=0 Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.647291 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2" exitCode=0 Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.647310 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753"} Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.647343 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37"} Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.647354 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2"} Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.647362 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"58211f7a0c83df8b70ab0c94abdcc3c0824047a0aa11f216a22dea02a287a2b0"} Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.647376 4751 scope.go:117] "RemoveContainer" containerID="54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.647503 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.656907 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-log-socket\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.656950 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-openvswitch\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.656982 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-netd\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.656997 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-ovn\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657020 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-systemd-units\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657077 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657092 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-systemd\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657113 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-node-log\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657131 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-netns\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657144 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-ovn-kubernetes\") pod 
\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657185 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-etc-openvswitch\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657210 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-config\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657241 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-script-lib\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657257 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-env-overrides\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657273 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovn-node-metrics-cert\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657292 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-bin\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657307 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-kubelet\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657339 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8497\" (UniqueName: \"kubernetes.io/projected/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-kube-api-access-s8497\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657387 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-slash\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657405 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-var-lib-openvswitch\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657544 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-systemd-units\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657562 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-var-lib-openvswitch\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657582 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657606 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-run-openvswitch\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657623 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657637 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-cni-netd\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657656 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-cni-bin\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657672 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-etc-openvswitch\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657687 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-slash\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657702 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-node-log\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657733 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-ovn-node-metrics-cert\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657747 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-log-socket\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657763 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-run-ovn\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657781 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-ovnkube-script-lib\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657804 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-ovnkube-config\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657818 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-run-systemd\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657841 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-kubelet\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc 
kubenswrapper[4751]: I0130 21:26:22.657866 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-env-overrides\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657879 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qbt4\" (UniqueName: \"kubernetes.io/projected/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-kube-api-access-2qbt4\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657906 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-run-netns\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658069 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-log-socket" (OuterVolumeSpecName: "log-socket") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658093 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658111 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658126 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658142 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658161 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658614 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-node-log" (OuterVolumeSpecName: "node-log") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658640 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658657 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658707 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658715 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-slash" (OuterVolumeSpecName: "host-slash") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658712 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658743 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). 
InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658769 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.659147 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.659164 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.659300 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.662979 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-kube-api-access-s8497" (OuterVolumeSpecName: "kube-api-access-s8497") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "kube-api-access-s8497". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.663031 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.664774 4751 scope.go:117] "RemoveContainer" containerID="a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.671470 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.714887 4751 scope.go:117] "RemoveContainer" containerID="fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.729824 4751 scope.go:117] "RemoveContainer" containerID="5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.746249 4751 scope.go:117] "RemoveContainer" containerID="29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759300 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-ovnkube-script-lib\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759358 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-ovnkube-config\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759380 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-run-systemd\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759404 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-kubelet\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759438 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-env-overrides\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759456 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qbt4\" (UniqueName: \"kubernetes.io/projected/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-kube-api-access-2qbt4\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759492 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-run-netns\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759519 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-systemd-units\") pod 
\"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759540 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-var-lib-openvswitch\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759562 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759586 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-run-openvswitch\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759604 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759618 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-cni-netd\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759637 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-cni-bin\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759667 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-etc-openvswitch\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759684 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-slash\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759700 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-node-log\") pod \"ovnkube-node-vnh74\" (UID: 
\"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759728 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-ovn-node-metrics-cert\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759743 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-log-socket\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759760 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-run-ovn\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759799 4751 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759809 4751 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759818 4751 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759828 4751 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759837 4751 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759846 4751 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759854 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8497\" (UniqueName: \"kubernetes.io/projected/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-kube-api-access-s8497\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759863 4751 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759871 4751 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759881 4751 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759888 4751 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759897 4751 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759904 4751 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759912 4751 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759920 4751 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759927 4751 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759935 4751 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759943 4751 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759951 4751 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759961 4751 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759997 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-run-ovn\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.760334 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.760821 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-ovnkube-script-lib\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.760852 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-run-openvswitch\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.760874 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.760894 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-cni-netd\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.760941 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-cni-bin\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.760960 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-etc-openvswitch\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.760979 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-slash\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.760997 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-node-log\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.761923 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-ovnkube-config\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.762630 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-env-overrides\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.762925 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-run-netns\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.763007 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-systemd-units\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.763087 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-var-lib-openvswitch\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.763175 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-kubelet\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.763252 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-run-systemd\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.763345 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-log-socket\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.772260 4751 scope.go:117] "RemoveContainer" containerID="661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.780489 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-ovn-node-metrics-cert\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.787753 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2qbt4\" (UniqueName: \"kubernetes.io/projected/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-kube-api-access-2qbt4\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.794238 4751 scope.go:117] "RemoveContainer" containerID="e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.815664 4751 scope.go:117] "RemoveContainer" containerID="cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.831425 4751 scope.go:117] "RemoveContainer" containerID="4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.844553 4751 scope.go:117] "RemoveContainer" containerID="54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.845045 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982\": container with ID starting with 54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982 not found: ID does not exist" containerID="54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.845075 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982"} err="failed to get container status \"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982\": rpc error: code = NotFound desc = could not find container \"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982\": container with ID starting with 54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.845099 4751 scope.go:117] "RemoveContainer" containerID="a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.845576 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\": container with ID starting with a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4 not found: ID does not exist" containerID="a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.845606 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4"} err="failed to get container status \"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\": rpc error: code = NotFound desc = could not find container \"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\": container with ID starting with a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.845628 4751 scope.go:117] "RemoveContainer" containerID="fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.845878 4751 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\": container with ID starting with fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568 not found: ID does not exist" containerID="fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.845921 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568"} err="failed to get container status \"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\": rpc error: code = NotFound desc = could not find container \"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\": container with ID starting with fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.845949 4751 scope.go:117] "RemoveContainer" containerID="5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.846216 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\": container with ID starting with 5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753 not found: ID does not exist" containerID="5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.846240 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753"} err="failed to get container status \"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\": rpc error: code = NotFound desc = could not find container \"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\": container with ID starting with 5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.846256 4751 scope.go:117] "RemoveContainer" containerID="29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.846516 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\": container with ID starting with 29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37 not found: ID does not exist" containerID="29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.846543 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37"} err="failed to get container status \"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\": rpc error: code = NotFound desc = could not find container \"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\": container with ID starting with 29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.846557 4751 scope.go:117] "RemoveContainer" 
containerID="661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.846762 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\": container with ID starting with 661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2 not found: ID does not exist" containerID="661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.846783 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2"} err="failed to get container status \"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\": rpc error: code = NotFound desc = could not find container \"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\": container with ID starting with 661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.846798 4751 scope.go:117] "RemoveContainer" containerID="e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.847029 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\": container with ID starting with e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06 not found: ID does not exist" containerID="e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.847053 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06"} err="failed to get container status \"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\": rpc error: code = NotFound desc = could not find container \"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\": container with ID starting with e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.847066 4751 scope.go:117] "RemoveContainer" containerID="cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.847264 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\": container with ID starting with cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2 not found: ID does not exist" containerID="cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.847288 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2"} err="failed to get container status \"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\": rpc error: code = NotFound desc = could not find container \"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\": container with ID starting with 
cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.847301 4751 scope.go:117] "RemoveContainer" containerID="4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.847714 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\": container with ID starting with 4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea not found: ID does not exist" containerID="4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.847739 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea"} err="failed to get container status \"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\": rpc error: code = NotFound desc = could not find container \"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\": container with ID starting with 4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.847752 4751 scope.go:117] "RemoveContainer" containerID="54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.847955 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982"} err="failed to get container status \"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982\": rpc error: code = NotFound desc = could not find container \"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982\": container with ID starting with 54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.847977 4751 scope.go:117] "RemoveContainer" containerID="a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.848202 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4"} err="failed to get container status \"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\": rpc error: code = NotFound desc = could not find container \"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\": container with ID starting with a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.848221 4751 scope.go:117] "RemoveContainer" containerID="fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.848445 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568"} err="failed to get container status \"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\": rpc error: code = NotFound desc = could not find container \"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\": container with ID starting with 
fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.848466 4751 scope.go:117] "RemoveContainer" containerID="5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.848688 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753"} err="failed to get container status \"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\": rpc error: code = NotFound desc = could not find container \"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\": container with ID starting with 5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.848710 4751 scope.go:117] "RemoveContainer" containerID="29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.848923 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37"} err="failed to get container status \"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\": rpc error: code = NotFound desc = could not find container \"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\": container with ID starting with 29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.848942 4751 scope.go:117] "RemoveContainer" containerID="661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.849139 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2"} err="failed to get container status \"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\": rpc error: code = NotFound desc = could not find container \"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\": container with ID starting with 661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.849165 4751 scope.go:117] "RemoveContainer" containerID="e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.849397 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06"} err="failed to get container status \"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\": rpc error: code = NotFound desc = could not find container \"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\": container with ID starting with e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.849416 4751 scope.go:117] "RemoveContainer" containerID="cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.849630 4751 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2"} err="failed to get container status \"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\": rpc error: code = NotFound desc = could not find container \"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\": container with ID starting with cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.849655 4751 scope.go:117] "RemoveContainer" containerID="4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.849847 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea"} err="failed to get container status \"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\": rpc error: code = NotFound desc = could not find container \"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\": container with ID starting with 4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.849866 4751 scope.go:117] "RemoveContainer" containerID="54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.850064 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982"} err="failed to get container status \"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982\": rpc error: code = NotFound desc = could not find container \"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982\": container with ID starting with 54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.850085 4751 scope.go:117] "RemoveContainer" containerID="a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.850283 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4"} err="failed to get container status \"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\": rpc error: code = NotFound desc = could not find container \"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\": container with ID starting with a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.850302 4751 scope.go:117] "RemoveContainer" containerID="fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.850579 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568"} err="failed to get container status \"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\": rpc error: code = NotFound desc = could not find container \"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\": container with ID starting with fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568 not found: ID does not exist" Jan 
30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.850601 4751 scope.go:117] "RemoveContainer" containerID="5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.850807 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753"} err="failed to get container status \"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\": rpc error: code = NotFound desc = could not find container \"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\": container with ID starting with 5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.850840 4751 scope.go:117] "RemoveContainer" containerID="29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.851248 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37"} err="failed to get container status \"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\": rpc error: code = NotFound desc = could not find container \"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\": container with ID starting with 29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.851267 4751 scope.go:117] "RemoveContainer" containerID="661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.851559 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2"} err="failed to get container status \"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\": rpc error: code = NotFound desc = could not find container \"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\": container with ID starting with 661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.851579 4751 scope.go:117] "RemoveContainer" containerID="e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.851833 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06"} err="failed to get container status \"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\": rpc error: code = NotFound desc = could not find container \"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\": container with ID starting with e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.851863 4751 scope.go:117] "RemoveContainer" containerID="cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.852117 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2"} err="failed to get container status 
\"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\": rpc error: code = NotFound desc = could not find container \"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\": container with ID starting with cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.852147 4751 scope.go:117] "RemoveContainer" containerID="4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.852391 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea"} err="failed to get container status \"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\": rpc error: code = NotFound desc = could not find container \"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\": container with ID starting with 4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.905702 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: W0130 21:26:22.922150 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode06def3c_fba4_4d8b_a757_8a9691b6e8d5.slice/crio-800fad32300ea017786587d7cde335216ff0418cedb424cb73e635531eec011a WatchSource:0}: Error finding container 800fad32300ea017786587d7cde335216ff0418cedb424cb73e635531eec011a: Status 404 returned error can't find the container with id 800fad32300ea017786587d7cde335216ff0418cedb424cb73e635531eec011a Jan 30 21:26:23 crc kubenswrapper[4751]: I0130 21:26:23.004502 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n8bjd"] Jan 30 21:26:23 crc kubenswrapper[4751]: I0130 21:26:23.008475 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n8bjd"] Jan 30 21:26:23 crc kubenswrapper[4751]: I0130 21:26:23.653813 4751 generic.go:334] "Generic (PLEG): container finished" podID="e06def3c-fba4-4d8b-a757-8a9691b6e8d5" containerID="fbf27c0c77816fe260af9371a94b5decc79e2dcd9a7ea97c73a5e1b08c2aa7a7" exitCode=0 Jan 30 21:26:23 crc kubenswrapper[4751]: I0130 21:26:23.653876 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" event={"ID":"e06def3c-fba4-4d8b-a757-8a9691b6e8d5","Type":"ContainerDied","Data":"fbf27c0c77816fe260af9371a94b5decc79e2dcd9a7ea97c73a5e1b08c2aa7a7"} Jan 30 21:26:23 crc kubenswrapper[4751]: I0130 21:26:23.653903 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" event={"ID":"e06def3c-fba4-4d8b-a757-8a9691b6e8d5","Type":"ContainerStarted","Data":"800fad32300ea017786587d7cde335216ff0418cedb424cb73e635531eec011a"} Jan 30 21:26:23 crc kubenswrapper[4751]: I0130 21:26:23.986176 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" path="/var/lib/kubelet/pods/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/volumes" Jan 30 21:26:24 crc kubenswrapper[4751]: I0130 21:26:24.126930 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:26:24 crc kubenswrapper[4751]: I0130 21:26:24.127076 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:26:24 crc kubenswrapper[4751]: I0130 21:26:24.665384 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" event={"ID":"e06def3c-fba4-4d8b-a757-8a9691b6e8d5","Type":"ContainerStarted","Data":"c5bba681a0c5328060626237e12fc4a149d5fc336b368a86d5cd968ff56ea43b"} Jan 30 21:26:24 crc kubenswrapper[4751]: I0130 21:26:24.666356 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" event={"ID":"e06def3c-fba4-4d8b-a757-8a9691b6e8d5","Type":"ContainerStarted","Data":"6c9be4a6ce0c4681a390ddaaa2607239153351db36333368785f77d80128b789"} Jan 30 21:26:24 crc kubenswrapper[4751]: I0130 21:26:24.666455 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" event={"ID":"e06def3c-fba4-4d8b-a757-8a9691b6e8d5","Type":"ContainerStarted","Data":"695f321e07d13447854cd2795e0d52cc08a8014783da88bbb24e5408fb2cb5c9"} Jan 30 21:26:24 crc kubenswrapper[4751]: I0130 21:26:24.666532 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" event={"ID":"e06def3c-fba4-4d8b-a757-8a9691b6e8d5","Type":"ContainerStarted","Data":"814ce438a366d7261ae72260069f92dd6353fbcb1a6fa883000560cdb532e0c4"} Jan 30 21:26:24 crc kubenswrapper[4751]: I0130 21:26:24.666716 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" event={"ID":"e06def3c-fba4-4d8b-a757-8a9691b6e8d5","Type":"ContainerStarted","Data":"9d280fefc584670ba10b9e2a91ee82c0f4ffd9b03a6fbd4c9b8563ef418df4ca"} Jan 30 21:26:24 crc kubenswrapper[4751]: I0130 21:26:24.666799 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" event={"ID":"e06def3c-fba4-4d8b-a757-8a9691b6e8d5","Type":"ContainerStarted","Data":"ad63af0b5d286a7cf0519491f737673ef96d6554f2776f21f05f6b309d7892da"} Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.380055 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-l498d" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.786636 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg"] Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.788236 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.790189 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.790433 4751 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-n5r22" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.790631 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.816047 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-mbzjn"] Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.817167 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.819864 4751 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-cvfjx" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.824211 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-shbmk"] Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.826403 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.830750 4751 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-hnk4n" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.906451 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vhcm\" (UniqueName: \"kubernetes.io/projected/9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd-kube-api-access-7vhcm\") pod \"cert-manager-858654f9db-mbzjn\" (UID: \"9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd\") " pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.906564 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7zl5\" (UniqueName: \"kubernetes.io/projected/9acdc588-bef3-4ce2-bf06-afea86273408-kube-api-access-j7zl5\") pod \"cert-manager-webhook-687f57d79b-shbmk\" (UID: \"9acdc588-bef3-4ce2-bf06-afea86273408\") " pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.906606 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gl2f\" (UniqueName: \"kubernetes.io/projected/04bdab63-06c1-475f-8351-a2ccc4292f25-kube-api-access-8gl2f\") pod \"cert-manager-cainjector-cf98fcc89-9k9rg\" (UID: \"04bdab63-06c1-475f-8351-a2ccc4292f25\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:26 crc kubenswrapper[4751]: I0130 21:26:26.007942 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zl5\" (UniqueName: \"kubernetes.io/projected/9acdc588-bef3-4ce2-bf06-afea86273408-kube-api-access-j7zl5\") pod \"cert-manager-webhook-687f57d79b-shbmk\" (UID: \"9acdc588-bef3-4ce2-bf06-afea86273408\") " pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:26 crc kubenswrapper[4751]: I0130 21:26:26.008462 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8gl2f\" (UniqueName: \"kubernetes.io/projected/04bdab63-06c1-475f-8351-a2ccc4292f25-kube-api-access-8gl2f\") pod \"cert-manager-cainjector-cf98fcc89-9k9rg\" (UID: \"04bdab63-06c1-475f-8351-a2ccc4292f25\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:26 crc kubenswrapper[4751]: I0130 21:26:26.008709 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vhcm\" (UniqueName: \"kubernetes.io/projected/9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd-kube-api-access-7vhcm\") pod \"cert-manager-858654f9db-mbzjn\" (UID: \"9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd\") " pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:26 crc kubenswrapper[4751]: I0130 21:26:26.027051 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vhcm\" (UniqueName: \"kubernetes.io/projected/9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd-kube-api-access-7vhcm\") pod \"cert-manager-858654f9db-mbzjn\" (UID: \"9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd\") " pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:26 crc kubenswrapper[4751]: I0130 21:26:26.029903 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7zl5\" (UniqueName: \"kubernetes.io/projected/9acdc588-bef3-4ce2-bf06-afea86273408-kube-api-access-j7zl5\") pod \"cert-manager-webhook-687f57d79b-shbmk\" (UID: \"9acdc588-bef3-4ce2-bf06-afea86273408\") " pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:26 crc kubenswrapper[4751]: I0130 21:26:26.040806 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gl2f\" (UniqueName: \"kubernetes.io/projected/04bdab63-06c1-475f-8351-a2ccc4292f25-kube-api-access-8gl2f\") pod \"cert-manager-cainjector-cf98fcc89-9k9rg\" (UID: \"04bdab63-06c1-475f-8351-a2ccc4292f25\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:26 crc kubenswrapper[4751]: I0130 21:26:26.107776 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.133052 4751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(53b5db70e375ba90c1dbd19c750df31695be1a20dfe126c79312540f12b77ba1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.133237 4751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(53b5db70e375ba90c1dbd19c750df31695be1a20dfe126c79312540f12b77ba1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.133280 4751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(53b5db70e375ba90c1dbd19c750df31695be1a20dfe126c79312540f12b77ba1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.133378 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager(04bdab63-06c1-475f-8351-a2ccc4292f25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager(04bdab63-06c1-475f-8351-a2ccc4292f25)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(53b5db70e375ba90c1dbd19c750df31695be1a20dfe126c79312540f12b77ba1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" podUID="04bdab63-06c1-475f-8351-a2ccc4292f25" Jan 30 21:26:26 crc kubenswrapper[4751]: I0130 21:26:26.141387 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:26 crc kubenswrapper[4751]: I0130 21:26:26.149819 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.171947 4751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(c49424396545e535821a7f23ae860b318875e546e4c95d615e56330e31cfd7e4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.172110 4751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(c49424396545e535821a7f23ae860b318875e546e4c95d615e56330e31cfd7e4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.172236 4751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(c49424396545e535821a7f23ae860b318875e546e4c95d615e56330e31cfd7e4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.172556 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-mbzjn_cert-manager(9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-mbzjn_cert-manager(9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(c49424396545e535821a7f23ae860b318875e546e4c95d615e56330e31cfd7e4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-mbzjn" podUID="9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.199162 4751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(0edc5d01a8de309214f70280fc660dbeddf385496c4b3bc832bb4edd033bf940): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.199238 4751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(0edc5d01a8de309214f70280fc660dbeddf385496c4b3bc832bb4edd033bf940): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.199260 4751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(0edc5d01a8de309214f70280fc660dbeddf385496c4b3bc832bb4edd033bf940): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.199306 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-shbmk_cert-manager(9acdc588-bef3-4ce2-bf06-afea86273408)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-shbmk_cert-manager(9acdc588-bef3-4ce2-bf06-afea86273408)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(0edc5d01a8de309214f70280fc660dbeddf385496c4b3bc832bb4edd033bf940): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" podUID="9acdc588-bef3-4ce2-bf06-afea86273408" Jan 30 21:26:26 crc kubenswrapper[4751]: I0130 21:26:26.680153 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" event={"ID":"e06def3c-fba4-4d8b-a757-8a9691b6e8d5","Type":"ContainerStarted","Data":"1a38d38bed78d6417058a678a7fe778414ff9bd58065edd5c13c466806c12180"} Jan 30 21:26:29 crc kubenswrapper[4751]: I0130 21:26:29.699342 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" event={"ID":"e06def3c-fba4-4d8b-a757-8a9691b6e8d5","Type":"ContainerStarted","Data":"33a6a45d7f4c3c534bfe517ee30a85d20b4ad905296fd2b0319391ebcdbb97d6"} Jan 30 21:26:29 crc kubenswrapper[4751]: I0130 21:26:29.699826 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:29 crc kubenswrapper[4751]: I0130 21:26:29.699839 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:29 crc kubenswrapper[4751]: I0130 21:26:29.699849 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:29 crc kubenswrapper[4751]: I0130 21:26:29.739476 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" podStartSLOduration=7.739459482 podStartE2EDuration="7.739459482s" podCreationTimestamp="2026-01-30 21:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:26:29.738426135 +0000 UTC m=+728.484248784" watchObservedRunningTime="2026-01-30 21:26:29.739459482 +0000 UTC m=+728.485282131" Jan 30 21:26:29 crc kubenswrapper[4751]: I0130 21:26:29.743839 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:29 crc kubenswrapper[4751]: I0130 21:26:29.745504 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:30 crc kubenswrapper[4751]: I0130 21:26:30.133998 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-shbmk"] Jan 30 21:26:30 crc kubenswrapper[4751]: I0130 21:26:30.134111 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:30 crc kubenswrapper[4751]: I0130 21:26:30.134617 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:30 crc kubenswrapper[4751]: I0130 21:26:30.141383 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mbzjn"] Jan 30 21:26:30 crc kubenswrapper[4751]: I0130 21:26:30.141518 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:30 crc kubenswrapper[4751]: I0130 21:26:30.141891 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.179958 4751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(e17f23062a2b0bad7a23e53b2cd47e77076be6d95d66d82ff339abbb98a8dd2a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.180015 4751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(e17f23062a2b0bad7a23e53b2cd47e77076be6d95d66d82ff339abbb98a8dd2a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.180037 4751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(e17f23062a2b0bad7a23e53b2cd47e77076be6d95d66d82ff339abbb98a8dd2a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.180079 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-shbmk_cert-manager(9acdc588-bef3-4ce2-bf06-afea86273408)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-shbmk_cert-manager(9acdc588-bef3-4ce2-bf06-afea86273408)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(e17f23062a2b0bad7a23e53b2cd47e77076be6d95d66d82ff339abbb98a8dd2a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" podUID="9acdc588-bef3-4ce2-bf06-afea86273408" Jan 30 21:26:30 crc kubenswrapper[4751]: I0130 21:26:30.193137 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg"] Jan 30 21:26:30 crc kubenswrapper[4751]: I0130 21:26:30.193251 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:30 crc kubenswrapper[4751]: I0130 21:26:30.193656 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.215676 4751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(b363fe4a778d9ccbb0a5182bdeaffa84dd8adbd7ab2df81e38dde50d5681a56a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.215733 4751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(b363fe4a778d9ccbb0a5182bdeaffa84dd8adbd7ab2df81e38dde50d5681a56a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.215757 4751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(b363fe4a778d9ccbb0a5182bdeaffa84dd8adbd7ab2df81e38dde50d5681a56a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.215794 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-mbzjn_cert-manager(9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-mbzjn_cert-manager(9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(b363fe4a778d9ccbb0a5182bdeaffa84dd8adbd7ab2df81e38dde50d5681a56a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-mbzjn" podUID="9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.227114 4751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(01744bd11caded8a975cd25d2509156a2d1bbd000a563a6455a6d57dcf6cdb39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.227164 4751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(01744bd11caded8a975cd25d2509156a2d1bbd000a563a6455a6d57dcf6cdb39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.227183 4751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(01744bd11caded8a975cd25d2509156a2d1bbd000a563a6455a6d57dcf6cdb39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.227216 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager(04bdab63-06c1-475f-8351-a2ccc4292f25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager(04bdab63-06c1-475f-8351-a2ccc4292f25)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(01744bd11caded8a975cd25d2509156a2d1bbd000a563a6455a6d57dcf6cdb39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" podUID="04bdab63-06c1-475f-8351-a2ccc4292f25" Jan 30 21:26:35 crc kubenswrapper[4751]: I0130 21:26:35.977173 4751 scope.go:117] "RemoveContainer" containerID="83b2f589d316b2b21ef50ee0174ac43309d977d8244dba740216ca2dd67db344" Jan 30 21:26:35 crc kubenswrapper[4751]: E0130 21:26:35.977894 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-5sgk2_openshift-multus(bcecdc4b-6607-4e4e-a9b5-49b85c030d21)\"" pod="openshift-multus/multus-5sgk2" podUID="bcecdc4b-6607-4e4e-a9b5-49b85c030d21" Jan 30 21:26:42 crc kubenswrapper[4751]: I0130 21:26:42.975653 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:42 crc kubenswrapper[4751]: I0130 21:26:42.977010 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:43 crc kubenswrapper[4751]: E0130 21:26:43.025254 4751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(9b70e517ec2a2f9c93da474f981befc1b536efa81857bbc2951ad86d35642787): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:26:43 crc kubenswrapper[4751]: E0130 21:26:43.025390 4751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(9b70e517ec2a2f9c93da474f981befc1b536efa81857bbc2951ad86d35642787): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:43 crc kubenswrapper[4751]: E0130 21:26:43.025438 4751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(9b70e517ec2a2f9c93da474f981befc1b536efa81857bbc2951ad86d35642787): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:43 crc kubenswrapper[4751]: E0130 21:26:43.025526 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager(04bdab63-06c1-475f-8351-a2ccc4292f25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager(04bdab63-06c1-475f-8351-a2ccc4292f25)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(9b70e517ec2a2f9c93da474f981befc1b536efa81857bbc2951ad86d35642787): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" podUID="04bdab63-06c1-475f-8351-a2ccc4292f25" Jan 30 21:26:44 crc kubenswrapper[4751]: I0130 21:26:44.976080 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:44 crc kubenswrapper[4751]: I0130 21:26:44.977485 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:45 crc kubenswrapper[4751]: E0130 21:26:45.031422 4751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(b1e89963d3fb6138e941477ad0a264ecb1d59e35d629a712325c0c46c14b8089): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:26:45 crc kubenswrapper[4751]: E0130 21:26:45.031501 4751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(b1e89963d3fb6138e941477ad0a264ecb1d59e35d629a712325c0c46c14b8089): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:45 crc kubenswrapper[4751]: E0130 21:26:45.031531 4751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(b1e89963d3fb6138e941477ad0a264ecb1d59e35d629a712325c0c46c14b8089): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:45 crc kubenswrapper[4751]: E0130 21:26:45.031596 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-mbzjn_cert-manager(9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-mbzjn_cert-manager(9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(b1e89963d3fb6138e941477ad0a264ecb1d59e35d629a712325c0c46c14b8089): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-mbzjn" podUID="9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd" Jan 30 21:26:45 crc kubenswrapper[4751]: I0130 21:26:45.975191 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:45 crc kubenswrapper[4751]: I0130 21:26:45.976922 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:46 crc kubenswrapper[4751]: E0130 21:26:46.004156 4751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(ec1aa5dab71b39de1131e6bf70b88d1aad3b1f626549e465574d1890e6fda8cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:26:46 crc kubenswrapper[4751]: E0130 21:26:46.004241 4751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(ec1aa5dab71b39de1131e6bf70b88d1aad3b1f626549e465574d1890e6fda8cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:46 crc kubenswrapper[4751]: E0130 21:26:46.004269 4751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(ec1aa5dab71b39de1131e6bf70b88d1aad3b1f626549e465574d1890e6fda8cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:46 crc kubenswrapper[4751]: E0130 21:26:46.004352 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-shbmk_cert-manager(9acdc588-bef3-4ce2-bf06-afea86273408)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-shbmk_cert-manager(9acdc588-bef3-4ce2-bf06-afea86273408)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(ec1aa5dab71b39de1131e6bf70b88d1aad3b1f626549e465574d1890e6fda8cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" podUID="9acdc588-bef3-4ce2-bf06-afea86273408" Jan 30 21:26:49 crc kubenswrapper[4751]: I0130 21:26:49.976666 4751 scope.go:117] "RemoveContainer" containerID="83b2f589d316b2b21ef50ee0174ac43309d977d8244dba740216ca2dd67db344" Jan 30 21:26:50 crc kubenswrapper[4751]: I0130 21:26:50.885811 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5sgk2_bcecdc4b-6607-4e4e-a9b5-49b85c030d21/kube-multus/2.log" Jan 30 21:26:50 crc kubenswrapper[4751]: I0130 21:26:50.886227 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5sgk2" event={"ID":"bcecdc4b-6607-4e4e-a9b5-49b85c030d21","Type":"ContainerStarted","Data":"80504ec7cb514f9b12d6512d6b92672e51e1f2ac85e30724b02b38f23ef119fc"} Jan 30 21:26:52 crc kubenswrapper[4751]: I0130 21:26:52.945127 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.127640 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.127734 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.127800 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.128842 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a610754d75a118a60637a1e554575fc5a5a243d54c20205f4fedf2c00e804266"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.128950 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://a610754d75a118a60637a1e554575fc5a5a243d54c20205f4fedf2c00e804266" gracePeriod=600 Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.921731 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="a610754d75a118a60637a1e554575fc5a5a243d54c20205f4fedf2c00e804266" exitCode=0 Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.921811 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"a610754d75a118a60637a1e554575fc5a5a243d54c20205f4fedf2c00e804266"} Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.922603 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"ad350159473538b7294a1cb17b3c91bed6ccae12ecd005a2dc1c208ac650225b"} Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.922649 4751 scope.go:117] "RemoveContainer" containerID="a1b797d24a7a7f0cfe28e0e7b1326aa242a6fa28ef5d30064b33f02362b2f1a6" Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.974939 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.975464 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:55 crc kubenswrapper[4751]: I0130 21:26:55.186539 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg"] Jan 30 21:26:55 crc kubenswrapper[4751]: I0130 21:26:55.938715 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" event={"ID":"04bdab63-06c1-475f-8351-a2ccc4292f25","Type":"ContainerStarted","Data":"5c7fa1cebd0b137f6751c7c86400d9c4f1245c22e347323173cd02eadad09b36"} Jan 30 21:26:57 crc kubenswrapper[4751]: I0130 21:26:57.975498 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:57 crc kubenswrapper[4751]: I0130 21:26:57.976890 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:59 crc kubenswrapper[4751]: I0130 21:26:59.530693 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mbzjn"] Jan 30 21:26:59 crc kubenswrapper[4751]: I0130 21:26:59.985629 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" event={"ID":"04bdab63-06c1-475f-8351-a2ccc4292f25","Type":"ContainerStarted","Data":"6bed7bff594206b7e4db2756f55a1b17b2aaa993f1053a907853e796a24db6ab"} Jan 30 21:26:59 crc kubenswrapper[4751]: I0130 21:26:59.985691 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mbzjn" event={"ID":"9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd","Type":"ContainerStarted","Data":"c2e9d2ca4e9b4bb1fc7f0e0016fa27b5c8f80c3b6842d7d4adf02f13ff4eaa58"} Jan 30 21:27:00 crc kubenswrapper[4751]: I0130 21:27:00.004637 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" podStartSLOduration=30.724687067 podStartE2EDuration="35.004616104s" podCreationTimestamp="2026-01-30 21:26:25 +0000 UTC" firstStartedPulling="2026-01-30 21:26:55.195842936 +0000 UTC m=+753.941665585" lastFinishedPulling="2026-01-30 21:26:59.475771973 +0000 UTC m=+758.221594622" observedRunningTime="2026-01-30 21:27:00.003078585 +0000 UTC m=+758.748901264" watchObservedRunningTime="2026-01-30 21:27:00.004616104 +0000 UTC m=+758.750438753" Jan 30 21:27:00 crc kubenswrapper[4751]: I0130 21:27:00.975138 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:27:00 crc kubenswrapper[4751]: I0130 21:27:00.976083 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:27:02 crc kubenswrapper[4751]: I0130 21:27:02.155795 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-shbmk"] Jan 30 21:27:04 crc kubenswrapper[4751]: I0130 21:27:04.017928 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mbzjn" event={"ID":"9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd","Type":"ContainerStarted","Data":"c33e1277dcb8bda6ae26f572105920ba9cadb13a6b4ea30cd630c2dd58f0660e"} Jan 30 21:27:04 crc kubenswrapper[4751]: I0130 21:27:04.021907 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" event={"ID":"9acdc588-bef3-4ce2-bf06-afea86273408","Type":"ContainerStarted","Data":"9ab313adebe7e8122c4ff6b9b65ef4789b9ff7f0d4e13f0f915ab23f26a63b82"} Jan 30 21:27:04 crc kubenswrapper[4751]: I0130 21:27:04.047055 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-mbzjn" podStartSLOduration=35.489541852 podStartE2EDuration="39.046998698s" podCreationTimestamp="2026-01-30 21:26:25 +0000 UTC" firstStartedPulling="2026-01-30 21:26:59.539808518 +0000 UTC m=+758.285631177" lastFinishedPulling="2026-01-30 21:27:03.097265374 +0000 UTC m=+761.843088023" observedRunningTime="2026-01-30 21:27:04.036464879 +0000 UTC m=+762.782287528" watchObservedRunningTime="2026-01-30 21:27:04.046998698 +0000 UTC m=+762.792821357" Jan 30 21:27:05 crc kubenswrapper[4751]: I0130 21:27:05.036642 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" event={"ID":"9acdc588-bef3-4ce2-bf06-afea86273408","Type":"ContainerStarted","Data":"d5025a00c70a46078b1ad4d454005b1328143ea48bdeaf40fa775eeaf2c0ab3e"} Jan 30 21:27:05 crc kubenswrapper[4751]: I0130 21:27:05.069240 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" podStartSLOduration=38.541172156 podStartE2EDuration="40.069206053s" podCreationTimestamp="2026-01-30 21:26:25 +0000 UTC" firstStartedPulling="2026-01-30 21:27:03.081632485 +0000 UTC m=+761.827455164" lastFinishedPulling="2026-01-30 21:27:04.609666412 +0000 UTC m=+763.355489061" observedRunningTime="2026-01-30 21:27:05.057647499 +0000 UTC m=+763.803470188" watchObservedRunningTime="2026-01-30 21:27:05.069206053 +0000 UTC m=+763.815028742" Jan 30 21:27:05 crc kubenswrapper[4751]: I0130 21:27:05.606546 4751 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 21:27:06 crc kubenswrapper[4751]: I0130 21:27:06.044300 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:27:11 crc kubenswrapper[4751]: I0130 21:27:11.154281 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.301434 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd"] Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.304467 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.307680 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.332178 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd"] Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.403731 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.404098 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmbpg\" (UniqueName: \"kubernetes.io/projected/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-kube-api-access-jmbpg\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.404216 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.481062 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg"] Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.482982 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.491813 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg"] Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.505613 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.505719 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmbpg\" (UniqueName: \"kubernetes.io/projected/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-kube-api-access-jmbpg\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.505763 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.506205 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.506256 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.548428 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmbpg\" (UniqueName: \"kubernetes.io/projected/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-kube-api-access-jmbpg\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.607499 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " 
pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.607895 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.607988 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgl6g\" (UniqueName: \"kubernetes.io/projected/1cbca202-59f0-4772-a82c-8c448cbc4c70-kube-api-access-kgl6g\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.681287 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.709297 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgl6g\" (UniqueName: \"kubernetes.io/projected/1cbca202-59f0-4772-a82c-8c448cbc4c70-kube-api-access-kgl6g\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.709414 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.709444 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.709943 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.709951 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " 
pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.727970 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgl6g\" (UniqueName: \"kubernetes.io/projected/1cbca202-59f0-4772-a82c-8c448cbc4c70-kube-api-access-kgl6g\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.799978 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:38 crc kubenswrapper[4751]: I0130 21:27:38.097704 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd"] Jan 30 21:27:38 crc kubenswrapper[4751]: I0130 21:27:38.236837 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg"] Jan 30 21:27:38 crc kubenswrapper[4751]: W0130 21:27:38.237718 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cbca202_59f0_4772_a82c_8c448cbc4c70.slice/crio-00f1560fa0698e5c3dfd7ffc5943ed55e1b54b817660be0d0ca6c1e47446f938 WatchSource:0}: Error finding container 00f1560fa0698e5c3dfd7ffc5943ed55e1b54b817660be0d0ca6c1e47446f938: Status 404 returned error can't find the container with id 00f1560fa0698e5c3dfd7ffc5943ed55e1b54b817660be0d0ca6c1e47446f938 Jan 30 21:27:38 crc kubenswrapper[4751]: I0130 21:27:38.391841 4751 generic.go:334] "Generic (PLEG): container finished" podID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" containerID="347f9ed747e483e16fb6ae1c645ea8f9e1e241d75612df7496d92124e040f3b2" exitCode=0 Jan 30 21:27:38 crc kubenswrapper[4751]: I0130 21:27:38.392335 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" event={"ID":"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e","Type":"ContainerDied","Data":"347f9ed747e483e16fb6ae1c645ea8f9e1e241d75612df7496d92124e040f3b2"} Jan 30 21:27:38 crc kubenswrapper[4751]: I0130 21:27:38.392378 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" event={"ID":"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e","Type":"ContainerStarted","Data":"afce932094578d4622ffbd64cb7aa71b797cb261dfa58b636da404a5ddeda537"} Jan 30 21:27:38 crc kubenswrapper[4751]: I0130 21:27:38.393201 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" event={"ID":"1cbca202-59f0-4772-a82c-8c448cbc4c70","Type":"ContainerStarted","Data":"00f1560fa0698e5c3dfd7ffc5943ed55e1b54b817660be0d0ca6c1e47446f938"} Jan 30 21:27:39 crc kubenswrapper[4751]: I0130 21:27:39.405728 4751 generic.go:334] "Generic (PLEG): container finished" podID="1cbca202-59f0-4772-a82c-8c448cbc4c70" containerID="abe94b742d94eb174247c27e3a3c038f4045e5dcdb784f1f247f494e3ae1f48a" exitCode=0 Jan 30 21:27:39 crc kubenswrapper[4751]: I0130 21:27:39.405785 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" event={"ID":"1cbca202-59f0-4772-a82c-8c448cbc4c70","Type":"ContainerDied","Data":"abe94b742d94eb174247c27e3a3c038f4045e5dcdb784f1f247f494e3ae1f48a"} Jan 30 21:27:40 crc kubenswrapper[4751]: I0130 21:27:40.418499 4751 generic.go:334] "Generic (PLEG): container finished" podID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" containerID="00077efd881cb27326f6e85b8f3f194fe2c51b7a53178340a6cd81dc7d4c6583" exitCode=0 Jan 30 21:27:40 crc kubenswrapper[4751]: I0130 21:27:40.418692 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" event={"ID":"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e","Type":"ContainerDied","Data":"00077efd881cb27326f6e85b8f3f194fe2c51b7a53178340a6cd81dc7d4c6583"} Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.041691 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cwl6v"] Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.044068 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.065468 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cwl6v"] Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.100692 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd9ts\" (UniqueName: \"kubernetes.io/projected/aed619dc-ef21-4b05-ad8b-1fe65d151661-kube-api-access-qd9ts\") pod \"redhat-operators-cwl6v\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.100770 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-utilities\") pod \"redhat-operators-cwl6v\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.100816 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-catalog-content\") pod \"redhat-operators-cwl6v\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.201973 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-catalog-content\") pod \"redhat-operators-cwl6v\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.202143 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd9ts\" (UniqueName: \"kubernetes.io/projected/aed619dc-ef21-4b05-ad8b-1fe65d151661-kube-api-access-qd9ts\") pod \"redhat-operators-cwl6v\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.202220 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-utilities\") pod \"redhat-operators-cwl6v\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.202697 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-catalog-content\") pod \"redhat-operators-cwl6v\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.202768 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-utilities\") pod \"redhat-operators-cwl6v\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.235254 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd9ts\" (UniqueName: \"kubernetes.io/projected/aed619dc-ef21-4b05-ad8b-1fe65d151661-kube-api-access-qd9ts\") pod \"redhat-operators-cwl6v\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.397642 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.429316 4751 generic.go:334] "Generic (PLEG): container finished" podID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" containerID="a1a33ec969e1c6d383f8048ab15fcd257712831d15c58fa7001702dde20fdac5" exitCode=0 Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.429392 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" event={"ID":"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e","Type":"ContainerDied","Data":"a1a33ec969e1c6d383f8048ab15fcd257712831d15c58fa7001702dde20fdac5"} Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.833826 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cwl6v"] Jan 30 21:27:41 crc kubenswrapper[4751]: W0130 21:27:41.840271 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaed619dc_ef21_4b05_ad8b_1fe65d151661.slice/crio-ad819a6dc3fa066f060d584767d4a31a45ddc97ba5842a4292bc644253801abc WatchSource:0}: Error finding container ad819a6dc3fa066f060d584767d4a31a45ddc97ba5842a4292bc644253801abc: Status 404 returned error can't find the container with id ad819a6dc3fa066f060d584767d4a31a45ddc97ba5842a4292bc644253801abc Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.438309 4751 generic.go:334] "Generic (PLEG): container finished" podID="1cbca202-59f0-4772-a82c-8c448cbc4c70" containerID="4bac6aed72495d5a47025b1229e37fd0256684ee83fbbdb6b3d50f1e0a5fc0c5" exitCode=0 Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.438404 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" event={"ID":"1cbca202-59f0-4772-a82c-8c448cbc4c70","Type":"ContainerDied","Data":"4bac6aed72495d5a47025b1229e37fd0256684ee83fbbdb6b3d50f1e0a5fc0c5"} Jan 30 21:27:42 crc kubenswrapper[4751]: 
I0130 21:27:42.440295 4751 generic.go:334] "Generic (PLEG): container finished" podID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerID="ffd1a65f8a7c27c7f8621cd0bbe5505acca22aece232cb5556eb72c3d0444078" exitCode=0 Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.440348 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwl6v" event={"ID":"aed619dc-ef21-4b05-ad8b-1fe65d151661","Type":"ContainerDied","Data":"ffd1a65f8a7c27c7f8621cd0bbe5505acca22aece232cb5556eb72c3d0444078"} Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.440397 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwl6v" event={"ID":"aed619dc-ef21-4b05-ad8b-1fe65d151661","Type":"ContainerStarted","Data":"ad819a6dc3fa066f060d584767d4a31a45ddc97ba5842a4292bc644253801abc"} Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.688800 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.741591 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-util\") pod \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") " Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.741703 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmbpg\" (UniqueName: \"kubernetes.io/projected/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-kube-api-access-jmbpg\") pod \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") " Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.741923 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-bundle\") pod \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") " Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.752197 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-kube-api-access-jmbpg" (OuterVolumeSpecName: "kube-api-access-jmbpg") pod "de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" (UID: "de2d9dc5-eee5-4e7f-86dd-9b7eb581429e"). InnerVolumeSpecName "kube-api-access-jmbpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.752428 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-bundle" (OuterVolumeSpecName: "bundle") pod "de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" (UID: "de2d9dc5-eee5-4e7f-86dd-9b7eb581429e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.776018 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-util" (OuterVolumeSpecName: "util") pod "de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" (UID: "de2d9dc5-eee5-4e7f-86dd-9b7eb581429e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.843584 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.843787 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.843797 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmbpg\" (UniqueName: \"kubernetes.io/projected/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-kube-api-access-jmbpg\") on node \"crc\" DevicePath \"\"" Jan 30 21:27:43 crc kubenswrapper[4751]: I0130 21:27:43.447951 4751 generic.go:334] "Generic (PLEG): container finished" podID="1cbca202-59f0-4772-a82c-8c448cbc4c70" containerID="cde0bba9b5bd705e79427c82f01801f8fb8f078a030c2d0c0c73c34abe57027a" exitCode=0 Jan 30 21:27:43 crc kubenswrapper[4751]: I0130 21:27:43.448019 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" event={"ID":"1cbca202-59f0-4772-a82c-8c448cbc4c70","Type":"ContainerDied","Data":"cde0bba9b5bd705e79427c82f01801f8fb8f078a030c2d0c0c73c34abe57027a"} Jan 30 21:27:43 crc kubenswrapper[4751]: I0130 21:27:43.449463 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwl6v" event={"ID":"aed619dc-ef21-4b05-ad8b-1fe65d151661","Type":"ContainerStarted","Data":"66d25b52a89b8f927360bf7fff46b9e8c776909bb76d8cf550ff83e5244a7658"} Jan 30 21:27:43 crc kubenswrapper[4751]: I0130 21:27:43.451526 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" event={"ID":"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e","Type":"ContainerDied","Data":"afce932094578d4622ffbd64cb7aa71b797cb261dfa58b636da404a5ddeda537"} Jan 30 21:27:43 crc kubenswrapper[4751]: I0130 21:27:43.451568 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afce932094578d4622ffbd64cb7aa71b797cb261dfa58b636da404a5ddeda537" Jan 30 21:27:43 crc kubenswrapper[4751]: I0130 21:27:43.451570 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.460174 4751 generic.go:334] "Generic (PLEG): container finished" podID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerID="66d25b52a89b8f927360bf7fff46b9e8c776909bb76d8cf550ff83e5244a7658" exitCode=0 Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.460259 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwl6v" event={"ID":"aed619dc-ef21-4b05-ad8b-1fe65d151661","Type":"ContainerDied","Data":"66d25b52a89b8f927360bf7fff46b9e8c776909bb76d8cf550ff83e5244a7658"} Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.714228 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.774025 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-util\") pod \"1cbca202-59f0-4772-a82c-8c448cbc4c70\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.774112 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgl6g\" (UniqueName: \"kubernetes.io/projected/1cbca202-59f0-4772-a82c-8c448cbc4c70-kube-api-access-kgl6g\") pod \"1cbca202-59f0-4772-a82c-8c448cbc4c70\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.774299 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-bundle\") pod \"1cbca202-59f0-4772-a82c-8c448cbc4c70\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.776681 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-bundle" (OuterVolumeSpecName: "bundle") pod "1cbca202-59f0-4772-a82c-8c448cbc4c70" (UID: "1cbca202-59f0-4772-a82c-8c448cbc4c70"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.784403 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cbca202-59f0-4772-a82c-8c448cbc4c70-kube-api-access-kgl6g" (OuterVolumeSpecName: "kube-api-access-kgl6g") pod "1cbca202-59f0-4772-a82c-8c448cbc4c70" (UID: "1cbca202-59f0-4772-a82c-8c448cbc4c70"). InnerVolumeSpecName "kube-api-access-kgl6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.785682 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-util" (OuterVolumeSpecName: "util") pod "1cbca202-59f0-4772-a82c-8c448cbc4c70" (UID: "1cbca202-59f0-4772-a82c-8c448cbc4c70"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.876450 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.876507 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.876518 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgl6g\" (UniqueName: \"kubernetes.io/projected/1cbca202-59f0-4772-a82c-8c448cbc4c70-kube-api-access-kgl6g\") on node \"crc\" DevicePath \"\"" Jan 30 21:27:45 crc kubenswrapper[4751]: I0130 21:27:45.473766 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwl6v" event={"ID":"aed619dc-ef21-4b05-ad8b-1fe65d151661","Type":"ContainerStarted","Data":"c6ce5ed14895f5b93da23984b65fd10bfc6edb5544dcd49c545fcfd5fdc7a36b"} Jan 30 21:27:45 crc kubenswrapper[4751]: I0130 21:27:45.478153 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" event={"ID":"1cbca202-59f0-4772-a82c-8c448cbc4c70","Type":"ContainerDied","Data":"00f1560fa0698e5c3dfd7ffc5943ed55e1b54b817660be0d0ca6c1e47446f938"} Jan 30 21:27:45 crc kubenswrapper[4751]: I0130 21:27:45.478203 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00f1560fa0698e5c3dfd7ffc5943ed55e1b54b817660be0d0ca6c1e47446f938" Jan 30 21:27:45 crc kubenswrapper[4751]: I0130 21:27:45.478278 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:45 crc kubenswrapper[4751]: I0130 21:27:45.506612 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cwl6v" podStartSLOduration=2.043978496 podStartE2EDuration="4.506592811s" podCreationTimestamp="2026-01-30 21:27:41 +0000 UTC" firstStartedPulling="2026-01-30 21:27:42.44123011 +0000 UTC m=+801.187052759" lastFinishedPulling="2026-01-30 21:27:44.903844425 +0000 UTC m=+803.649667074" observedRunningTime="2026-01-30 21:27:45.504592078 +0000 UTC m=+804.250414737" watchObservedRunningTime="2026-01-30 21:27:45.506592811 +0000 UTC m=+804.252415460" Jan 30 21:27:51 crc kubenswrapper[4751]: I0130 21:27:51.398528 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:51 crc kubenswrapper[4751]: I0130 21:27:51.399018 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.470586 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cwl6v" podUID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerName="registry-server" probeResult="failure" output=< Jan 30 21:27:52 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:27:52 crc kubenswrapper[4751]: > Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.566512 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h"] Jan 30 21:27:52 crc kubenswrapper[4751]: E0130 21:27:52.566789 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cbca202-59f0-4772-a82c-8c448cbc4c70" containerName="extract" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.566803 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cbca202-59f0-4772-a82c-8c448cbc4c70" containerName="extract" Jan 30 21:27:52 crc kubenswrapper[4751]: E0130 21:27:52.566815 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cbca202-59f0-4772-a82c-8c448cbc4c70" containerName="util" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.566821 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cbca202-59f0-4772-a82c-8c448cbc4c70" containerName="util" Jan 30 21:27:52 crc kubenswrapper[4751]: E0130 21:27:52.566829 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" containerName="extract" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.566834 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" containerName="extract" Jan 30 21:27:52 crc kubenswrapper[4751]: E0130 21:27:52.566844 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cbca202-59f0-4772-a82c-8c448cbc4c70" containerName="pull" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.566850 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cbca202-59f0-4772-a82c-8c448cbc4c70" containerName="pull" Jan 30 21:27:52 crc kubenswrapper[4751]: E0130 21:27:52.566857 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" containerName="pull" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.566862 4751 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" containerName="pull" Jan 30 21:27:52 crc kubenswrapper[4751]: E0130 21:27:52.566872 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" containerName="util" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.566877 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" containerName="util" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.567009 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" containerName="extract" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.567021 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cbca202-59f0-4772-a82c-8c448cbc4c70" containerName="extract" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.567750 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:52 crc kubenswrapper[4751]: W0130 21:27:52.571023 4751 reflector.go:561] object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert": failed to list *v1.Secret: secrets "loki-operator-controller-manager-service-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators-redhat": no relationship found between node 'crc' and this object Jan 30 21:27:52 crc kubenswrapper[4751]: E0130 21:27:52.571074 4751 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators-redhat\"/\"loki-operator-controller-manager-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"loki-operator-controller-manager-service-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operators-redhat\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:27:52 crc kubenswrapper[4751]: W0130 21:27:52.571170 4751 reflector.go:561] object-"openshift-operators-redhat"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-operators-redhat": no relationship found between node 'crc' and this object Jan 30 21:27:52 crc kubenswrapper[4751]: E0130 21:27:52.571188 4751 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators-redhat\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-operators-redhat\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:27:52 crc kubenswrapper[4751]: W0130 21:27:52.571294 4751 reflector.go:561] object-"openshift-operators-redhat"/"loki-operator-manager-config": failed to list *v1.ConfigMap: configmaps "loki-operator-manager-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-operators-redhat": no relationship found between node 'crc' and this object Jan 30 21:27:52 crc kubenswrapper[4751]: E0130 21:27:52.571312 4751 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators-redhat\"/\"loki-operator-manager-config\": Failed to watch 
Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.571502 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Jan 30 21:27:52 crc kubenswrapper[4751]: W0130 21:27:52.571668 4751 reflector.go:561] object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-4v65n": failed to list *v1.Secret: secrets "loki-operator-controller-manager-dockercfg-4v65n" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators-redhat": no relationship found between node 'crc' and this object Jan 30 21:27:52 crc kubenswrapper[4751]: E0130 21:27:52.571688 4751 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators-redhat\"/\"loki-operator-controller-manager-dockercfg-4v65n\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"loki-operator-controller-manager-dockercfg-4v65n\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operators-redhat\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.572296 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.601560 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h"] Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.735912 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d32a4de7-a9b5-408d-b678-bcc0244cceee-apiservice-cert\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.735985 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d32a4de7-a9b5-408d-b678-bcc0244cceee-webhook-cert\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.736207 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d32a4de7-a9b5-408d-b678-bcc0244cceee-manager-config\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.736254 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsjzj\" (UniqueName: \"kubernetes.io/projected/d32a4de7-a9b5-408d-b678-bcc0244cceee-kube-api-access-lsjzj\") pod 
\"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.736364 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d32a4de7-a9b5-408d-b678-bcc0244cceee-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.837949 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d32a4de7-a9b5-408d-b678-bcc0244cceee-webhook-cert\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.838051 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsjzj\" (UniqueName: \"kubernetes.io/projected/d32a4de7-a9b5-408d-b678-bcc0244cceee-kube-api-access-lsjzj\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.838075 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d32a4de7-a9b5-408d-b678-bcc0244cceee-manager-config\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.838110 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d32a4de7-a9b5-408d-b678-bcc0244cceee-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.838164 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d32a4de7-a9b5-408d-b678-bcc0244cceee-apiservice-cert\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.844063 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d32a4de7-a9b5-408d-b678-bcc0244cceee-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:53 crc kubenswrapper[4751]: I0130 21:27:53.671902 4751 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operators-redhat"/"openshift-service-ca.crt" Jan 30 21:27:53 crc kubenswrapper[4751]: I0130 21:27:53.686006 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsjzj\" (UniqueName: \"kubernetes.io/projected/d32a4de7-a9b5-408d-b678-bcc0244cceee-kube-api-access-lsjzj\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:53 crc kubenswrapper[4751]: I0130 21:27:53.747502 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Jan 30 21:27:53 crc kubenswrapper[4751]: I0130 21:27:53.748346 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-4v65n" Jan 30 21:27:53 crc kubenswrapper[4751]: I0130 21:27:53.755031 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d32a4de7-a9b5-408d-b678-bcc0244cceee-apiservice-cert\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:53 crc kubenswrapper[4751]: I0130 21:27:53.768962 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d32a4de7-a9b5-408d-b678-bcc0244cceee-webhook-cert\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:53 crc kubenswrapper[4751]: I0130 21:27:53.778047 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Jan 30 21:27:53 crc kubenswrapper[4751]: I0130 21:27:53.779769 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d32a4de7-a9b5-408d-b678-bcc0244cceee-manager-config\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:53 crc kubenswrapper[4751]: I0130 21:27:53.794036 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:54 crc kubenswrapper[4751]: I0130 21:27:54.139895 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h"] Jan 30 21:27:54 crc kubenswrapper[4751]: I0130 21:27:54.540616 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" event={"ID":"d32a4de7-a9b5-408d-b678-bcc0244cceee","Type":"ContainerStarted","Data":"7a3f6ad980d5cd5f632a1cbac28d9b569f5edb511576da6618a11519b6fa03b3"} Jan 30 21:27:56 crc kubenswrapper[4751]: I0130 21:27:56.861581 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2"] Jan 30 21:27:56 crc kubenswrapper[4751]: I0130 21:27:56.862773 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2" Jan 30 21:27:56 crc kubenswrapper[4751]: I0130 21:27:56.867693 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-vmc5w" Jan 30 21:27:56 crc kubenswrapper[4751]: I0130 21:27:56.867743 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Jan 30 21:27:56 crc kubenswrapper[4751]: I0130 21:27:56.867807 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Jan 30 21:27:56 crc kubenswrapper[4751]: I0130 21:27:56.870124 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2"] Jan 30 21:27:57 crc kubenswrapper[4751]: I0130 21:27:57.003038 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msml8\" (UniqueName: \"kubernetes.io/projected/c60111a8-d193-4bbb-af4b-a5f286a4b04b-kube-api-access-msml8\") pod \"cluster-logging-operator-79cf69ddc8-tg4r2\" (UID: \"c60111a8-d193-4bbb-af4b-a5f286a4b04b\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2" Jan 30 21:27:57 crc kubenswrapper[4751]: I0130 21:27:57.104166 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msml8\" (UniqueName: \"kubernetes.io/projected/c60111a8-d193-4bbb-af4b-a5f286a4b04b-kube-api-access-msml8\") pod \"cluster-logging-operator-79cf69ddc8-tg4r2\" (UID: \"c60111a8-d193-4bbb-af4b-a5f286a4b04b\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2" Jan 30 21:27:57 crc kubenswrapper[4751]: I0130 21:27:57.123265 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msml8\" (UniqueName: \"kubernetes.io/projected/c60111a8-d193-4bbb-af4b-a5f286a4b04b-kube-api-access-msml8\") pod \"cluster-logging-operator-79cf69ddc8-tg4r2\" (UID: \"c60111a8-d193-4bbb-af4b-a5f286a4b04b\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2" Jan 30 21:27:57 crc kubenswrapper[4751]: I0130 21:27:57.184259 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2" Jan 30 21:27:57 crc kubenswrapper[4751]: I0130 21:27:57.646154 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2"] Jan 30 21:27:58 crc kubenswrapper[4751]: I0130 21:27:58.570442 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2" event={"ID":"c60111a8-d193-4bbb-af4b-a5f286a4b04b","Type":"ContainerStarted","Data":"3548e818bc9718757d736b9510d032b65fa1385801e974033185f0e5afed0bdd"} Jan 30 21:28:00 crc kubenswrapper[4751]: I0130 21:28:00.594223 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" event={"ID":"d32a4de7-a9b5-408d-b678-bcc0244cceee","Type":"ContainerStarted","Data":"df93b784a76efee2e8ad28b15daa3ff6b5bfbd86e80b881c2b7ca52f493660dc"} Jan 30 21:28:01 crc kubenswrapper[4751]: I0130 21:28:01.478623 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:28:01 crc kubenswrapper[4751]: I0130 21:28:01.535937 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:28:04 crc kubenswrapper[4751]: I0130 21:28:04.430233 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cwl6v"] Jan 30 21:28:04 crc kubenswrapper[4751]: I0130 21:28:04.431216 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cwl6v" podUID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerName="registry-server" containerID="cri-o://c6ce5ed14895f5b93da23984b65fd10bfc6edb5544dcd49c545fcfd5fdc7a36b" gracePeriod=2 Jan 30 21:28:04 crc kubenswrapper[4751]: I0130 21:28:04.627891 4751 generic.go:334] "Generic (PLEG): container finished" podID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerID="c6ce5ed14895f5b93da23984b65fd10bfc6edb5544dcd49c545fcfd5fdc7a36b" exitCode=0 Jan 30 21:28:04 crc kubenswrapper[4751]: I0130 21:28:04.627938 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwl6v" event={"ID":"aed619dc-ef21-4b05-ad8b-1fe65d151661","Type":"ContainerDied","Data":"c6ce5ed14895f5b93da23984b65fd10bfc6edb5544dcd49c545fcfd5fdc7a36b"} Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.612651 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.661751 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwl6v" event={"ID":"aed619dc-ef21-4b05-ad8b-1fe65d151661","Type":"ContainerDied","Data":"ad819a6dc3fa066f060d584767d4a31a45ddc97ba5842a4292bc644253801abc"} Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.661864 4751 scope.go:117] "RemoveContainer" containerID="c6ce5ed14895f5b93da23984b65fd10bfc6edb5544dcd49c545fcfd5fdc7a36b" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.661962 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.672942 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-utilities\") pod \"aed619dc-ef21-4b05-ad8b-1fe65d151661\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.672983 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-catalog-content\") pod \"aed619dc-ef21-4b05-ad8b-1fe65d151661\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.673020 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd9ts\" (UniqueName: \"kubernetes.io/projected/aed619dc-ef21-4b05-ad8b-1fe65d151661-kube-api-access-qd9ts\") pod \"aed619dc-ef21-4b05-ad8b-1fe65d151661\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.673703 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-utilities" (OuterVolumeSpecName: "utilities") pod "aed619dc-ef21-4b05-ad8b-1fe65d151661" (UID: "aed619dc-ef21-4b05-ad8b-1fe65d151661"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.679173 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aed619dc-ef21-4b05-ad8b-1fe65d151661-kube-api-access-qd9ts" (OuterVolumeSpecName: "kube-api-access-qd9ts") pod "aed619dc-ef21-4b05-ad8b-1fe65d151661" (UID: "aed619dc-ef21-4b05-ad8b-1fe65d151661"). InnerVolumeSpecName "kube-api-access-qd9ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.714927 4751 scope.go:117] "RemoveContainer" containerID="66d25b52a89b8f927360bf7fff46b9e8c776909bb76d8cf550ff83e5244a7658" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.741715 4751 scope.go:117] "RemoveContainer" containerID="ffd1a65f8a7c27c7f8621cd0bbe5505acca22aece232cb5556eb72c3d0444078" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.774939 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.774977 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd9ts\" (UniqueName: \"kubernetes.io/projected/aed619dc-ef21-4b05-ad8b-1fe65d151661-kube-api-access-qd9ts\") on node \"crc\" DevicePath \"\"" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.826523 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aed619dc-ef21-4b05-ad8b-1fe65d151661" (UID: "aed619dc-ef21-4b05-ad8b-1fe65d151661"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.876017 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.987192 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cwl6v"] Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.992424 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cwl6v"] Jan 30 21:28:08 crc kubenswrapper[4751]: I0130 21:28:08.672205 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2" event={"ID":"c60111a8-d193-4bbb-af4b-a5f286a4b04b","Type":"ContainerStarted","Data":"8207320acaa5760c14d4e65eab12252def8c72fb4e30567f5ca0b21356489b91"} Jan 30 21:28:08 crc kubenswrapper[4751]: I0130 21:28:08.676173 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" event={"ID":"d32a4de7-a9b5-408d-b678-bcc0244cceee","Type":"ContainerStarted","Data":"d27fe29ee58b484cf73e7c5963ecb88e74717eb36fc52410ad47070f6dea6475"} Jan 30 21:28:08 crc kubenswrapper[4751]: I0130 21:28:08.677064 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:28:08 crc kubenswrapper[4751]: I0130 21:28:08.681071 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:28:08 crc kubenswrapper[4751]: I0130 21:28:08.749269 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" podStartSLOduration=3.245714837 podStartE2EDuration="16.749242003s" podCreationTimestamp="2026-01-30 21:27:52 +0000 UTC" firstStartedPulling="2026-01-30 21:27:54.156342387 +0000 UTC m=+812.902165036" lastFinishedPulling="2026-01-30 21:28:07.659869553 +0000 UTC m=+826.405692202" observedRunningTime="2026-01-30 21:28:08.743699235 +0000 UTC m=+827.489521884" watchObservedRunningTime="2026-01-30 21:28:08.749242003 +0000 UTC m=+827.495064692" Jan 30 21:28:08 crc kubenswrapper[4751]: I0130 21:28:08.750133 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2" podStartSLOduration=2.845247271 podStartE2EDuration="12.750125846s" podCreationTimestamp="2026-01-30 21:27:56 +0000 UTC" firstStartedPulling="2026-01-30 21:27:57.654638326 +0000 UTC m=+816.400460975" lastFinishedPulling="2026-01-30 21:28:07.559516901 +0000 UTC m=+826.305339550" observedRunningTime="2026-01-30 21:28:08.699852842 +0000 UTC m=+827.445675491" watchObservedRunningTime="2026-01-30 21:28:08.750125846 +0000 UTC m=+827.495948515" Jan 30 21:28:09 crc kubenswrapper[4751]: I0130 21:28:09.983012 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aed619dc-ef21-4b05-ad8b-1fe65d151661" path="/var/lib/kubelet/pods/aed619dc-ef21-4b05-ad8b-1fe65d151661/volumes" Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.218647 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Jan 30 21:28:13 crc kubenswrapper[4751]: E0130 21:28:13.220048 4751 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerName="extract-content" Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.220080 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerName="extract-content" Jan 30 21:28:13 crc kubenswrapper[4751]: E0130 21:28:13.220109 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerName="extract-utilities" Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.220128 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerName="extract-utilities" Jan 30 21:28:13 crc kubenswrapper[4751]: E0130 21:28:13.220169 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerName="registry-server" Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.220186 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerName="registry-server" Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.220495 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerName="registry-server" Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.221471 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.225514 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.227172 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.231320 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.258769 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1d3d2be3-99fb-4325-9215-bf9f53e6bb15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1d3d2be3-99fb-4325-9215-bf9f53e6bb15\") pod \"minio\" (UID: \"84e1b505-9173-4068-a585-830aa617354d\") " pod="minio-dev/minio" Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.258868 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6h4w\" (UniqueName: \"kubernetes.io/projected/84e1b505-9173-4068-a585-830aa617354d-kube-api-access-b6h4w\") pod \"minio\" (UID: \"84e1b505-9173-4068-a585-830aa617354d\") " pod="minio-dev/minio" Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.360502 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6h4w\" (UniqueName: \"kubernetes.io/projected/84e1b505-9173-4068-a585-830aa617354d-kube-api-access-b6h4w\") pod \"minio\" (UID: \"84e1b505-9173-4068-a585-830aa617354d\") " pod="minio-dev/minio" Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.360639 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1d3d2be3-99fb-4325-9215-bf9f53e6bb15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1d3d2be3-99fb-4325-9215-bf9f53e6bb15\") pod \"minio\" (UID: \"84e1b505-9173-4068-a585-830aa617354d\") " pod="minio-dev/minio" Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.367867 
4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.367901 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1d3d2be3-99fb-4325-9215-bf9f53e6bb15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1d3d2be3-99fb-4325-9215-bf9f53e6bb15\") pod \"minio\" (UID: \"84e1b505-9173-4068-a585-830aa617354d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5f395a8c08ded556242a2d037122118d8262ff0d7d91a036d1a811634d4c5f87/globalmount\"" pod="minio-dev/minio" Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.387484 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6h4w\" (UniqueName: \"kubernetes.io/projected/84e1b505-9173-4068-a585-830aa617354d-kube-api-access-b6h4w\") pod \"minio\" (UID: \"84e1b505-9173-4068-a585-830aa617354d\") " pod="minio-dev/minio" Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.408793 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1d3d2be3-99fb-4325-9215-bf9f53e6bb15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1d3d2be3-99fb-4325-9215-bf9f53e6bb15\") pod \"minio\" (UID: \"84e1b505-9173-4068-a585-830aa617354d\") " pod="minio-dev/minio" Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.549732 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Jan 30 21:28:14 crc kubenswrapper[4751]: I0130 21:28:14.024137 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 30 21:28:14 crc kubenswrapper[4751]: W0130 21:28:14.028077 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84e1b505_9173_4068_a585_830aa617354d.slice/crio-0991fae8c447b2cb8b06df8b33b5ae286ac35ee958656bac86239815181ed109 WatchSource:0}: Error finding container 0991fae8c447b2cb8b06df8b33b5ae286ac35ee958656bac86239815181ed109: Status 404 returned error can't find the container with id 0991fae8c447b2cb8b06df8b33b5ae286ac35ee958656bac86239815181ed109 Jan 30 21:28:14 crc kubenswrapper[4751]: I0130 21:28:14.740417 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"84e1b505-9173-4068-a585-830aa617354d","Type":"ContainerStarted","Data":"0991fae8c447b2cb8b06df8b33b5ae286ac35ee958656bac86239815181ed109"} Jan 30 21:28:17 crc kubenswrapper[4751]: I0130 21:28:17.764514 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"84e1b505-9173-4068-a585-830aa617354d","Type":"ContainerStarted","Data":"6e986e869fc40809622c325f22953fba262471727cfcc8ccb5d9b118692c29a5"} Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.628778 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=9.572444439 podStartE2EDuration="12.628761068s" podCreationTimestamp="2026-01-30 21:28:11 +0000 UTC" firstStartedPulling="2026-01-30 21:28:14.031264108 +0000 UTC m=+832.777086767" lastFinishedPulling="2026-01-30 21:28:17.087580737 +0000 UTC m=+835.833403396" observedRunningTime="2026-01-30 21:28:17.783827323 +0000 UTC m=+836.529649982" watchObservedRunningTime="2026-01-30 21:28:23.628761068 +0000 UTC m=+842.374583727" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.633714 4751 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc"] Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.634589 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.640940 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.641197 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-x85pc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.641467 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.641630 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.648115 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.659186 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc"] Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.732586 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d066c155-02e0-448e-9d4c-f578a36e553b-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.732638 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d066c155-02e0-448e-9d4c-f578a36e553b-config\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.732661 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/d066c155-02e0-448e-9d4c-f578a36e553b-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.732762 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pshk\" (UniqueName: \"kubernetes.io/projected/d066c155-02e0-448e-9d4c-f578a36e553b-kube-api-access-6pshk\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.732855 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d066c155-02e0-448e-9d4c-f578a36e553b-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: 
\"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.797144 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-gbf6p"] Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.798014 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.800159 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.800465 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.800637 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.819793 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-gbf6p"] Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839152 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d066c155-02e0-448e-9d4c-f578a36e553b-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839246 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839412 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh9r9\" (UniqueName: \"kubernetes.io/projected/096a86f8-72dc-4bd5-a2b4-48b67a26d792-kube-api-access-zh9r9\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839469 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d066c155-02e0-448e-9d4c-f578a36e553b-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839497 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/096a86f8-72dc-4bd5-a2b4-48b67a26d792-config\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839520 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839571 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d066c155-02e0-448e-9d4c-f578a36e553b-config\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839594 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-s3\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839648 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/d066c155-02e0-448e-9d4c-f578a36e553b-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839667 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839735 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pshk\" (UniqueName: \"kubernetes.io/projected/d066c155-02e0-448e-9d4c-f578a36e553b-kube-api-access-6pshk\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.841283 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d066c155-02e0-448e-9d4c-f578a36e553b-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.842112 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d066c155-02e0-448e-9d4c-f578a36e553b-config\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.848819 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: 
\"kubernetes.io/secret/d066c155-02e0-448e-9d4c-f578a36e553b-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.862134 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d066c155-02e0-448e-9d4c-f578a36e553b-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.869673 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pshk\" (UniqueName: \"kubernetes.io/projected/d066c155-02e0-448e-9d4c-f578a36e553b-kube-api-access-6pshk\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.908648 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"] Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.909551 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.912037 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.912239 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.918238 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"] Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.940990 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.941056 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh9r9\" (UniqueName: \"kubernetes.io/projected/096a86f8-72dc-4bd5-a2b4-48b67a26d792-kube-api-access-zh9r9\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.941102 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/096a86f8-72dc-4bd5-a2b4-48b67a26d792-config\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.941132 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: 
\"kubernetes.io/secret/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.941163 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-s3\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.941190 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.942838 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/096a86f8-72dc-4bd5-a2b4-48b67a26d792-config\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.945282 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-s3\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.945361 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.952127 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.962645 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh9r9\" (UniqueName: \"kubernetes.io/projected/096a86f8-72dc-4bd5-a2b4-48b67a26d792-kube-api-access-zh9r9\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.962652 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.963132 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.043048 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8083b036-5700-420a-ad3f-1e471813194e-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.043114 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/8083b036-5700-420a-ad3f-1e471813194e-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.043159 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh684\" (UniqueName: \"kubernetes.io/projected/8083b036-5700-420a-ad3f-1e471813194e-kube-api-access-rh684\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.043189 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8083b036-5700-420a-ad3f-1e471813194e-config\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.043219 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/8083b036-5700-420a-ad3f-1e471813194e-logging-loki-query-frontend-http\") pod 
\"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.060403 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"] Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.061516 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.065183 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.065384 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.065551 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.065697 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-zkphp" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.065792 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.065895 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.073195 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"] Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.075051 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.093858 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"] Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.106559 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"] Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.114705 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.146664 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8083b036-5700-420a-ad3f-1e471813194e-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.149063 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8083b036-5700-420a-ad3f-1e471813194e-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.149399 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/8083b036-5700-420a-ad3f-1e471813194e-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.149471 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh684\" (UniqueName: \"kubernetes.io/projected/8083b036-5700-420a-ad3f-1e471813194e-kube-api-access-rh684\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.149521 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8083b036-5700-420a-ad3f-1e471813194e-config\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.149585 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/8083b036-5700-420a-ad3f-1e471813194e-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.150940 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8083b036-5700-420a-ad3f-1e471813194e-config\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.164640 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/8083b036-5700-420a-ad3f-1e471813194e-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " 
pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.165644 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh684\" (UniqueName: \"kubernetes.io/projected/8083b036-5700-420a-ad3f-1e471813194e-kube-api-access-rh684\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.169436 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/8083b036-5700-420a-ad3f-1e471813194e-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.238680 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254065 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-lokistack-gateway\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254110 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7bqv\" (UniqueName: \"kubernetes.io/projected/326140a4-6f2a-48c1-b5a2-0b02ce345c50-kube-api-access-v7bqv\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254135 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs7q4\" (UniqueName: \"kubernetes.io/projected/653268f5-1827-4109-a68b-3cc7670e65f8-kube-api-access-cs7q4\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254150 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/326140a4-6f2a-48c1-b5a2-0b02ce345c50-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254170 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/653268f5-1827-4109-a68b-3cc7670e65f8-tenants\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254198 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254319 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254363 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254379 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/653268f5-1827-4109-a68b-3cc7670e65f8-tls-secret\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254398 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-lokistack-gateway\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254432 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/326140a4-6f2a-48c1-b5a2-0b02ce345c50-tenants\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254454 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-rbac\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254472 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-rbac\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254488 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254505 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/653268f5-1827-4109-a68b-3cc7670e65f8-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254527 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/326140a4-6f2a-48c1-b5a2-0b02ce345c50-tls-secret\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356062 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/326140a4-6f2a-48c1-b5a2-0b02ce345c50-tls-secret\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356119 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-lokistack-gateway\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356136 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7bqv\" (UniqueName: \"kubernetes.io/projected/326140a4-6f2a-48c1-b5a2-0b02ce345c50-kube-api-access-v7bqv\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356155 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/326140a4-6f2a-48c1-b5a2-0b02ce345c50-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356171 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs7q4\" (UniqueName: \"kubernetes.io/projected/653268f5-1827-4109-a68b-3cc7670e65f8-kube-api-access-cs7q4\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356185 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/653268f5-1827-4109-a68b-3cc7670e65f8-tenants\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" 
(UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356203 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356241 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356260 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356276 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/653268f5-1827-4109-a68b-3cc7670e65f8-tls-secret\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356295 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-lokistack-gateway\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356343 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/326140a4-6f2a-48c1-b5a2-0b02ce345c50-tenants\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356363 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-rbac\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356382 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-rbac\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356396 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356413 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/653268f5-1827-4109-a68b-3cc7670e65f8-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.357710 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.363383 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.363736 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/653268f5-1827-4109-a68b-3cc7670e65f8-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.364045 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/326140a4-6f2a-48c1-b5a2-0b02ce345c50-tls-secret\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.364722 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.365388 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-rbac\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.369024 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: 
\"kubernetes.io/secret/326140a4-6f2a-48c1-b5a2-0b02ce345c50-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.369682 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-lokistack-gateway\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.372815 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-rbac\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.373374 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.379046 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/653268f5-1827-4109-a68b-3cc7670e65f8-tls-secret\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.385575 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-lokistack-gateway\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.386086 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/326140a4-6f2a-48c1-b5a2-0b02ce345c50-tenants\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.392891 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/653268f5-1827-4109-a68b-3cc7670e65f8-tenants\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.397559 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7bqv\" (UniqueName: \"kubernetes.io/projected/326140a4-6f2a-48c1-b5a2-0b02ce345c50-kube-api-access-v7bqv\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.401023 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs7q4\" (UniqueName: \"kubernetes.io/projected/653268f5-1827-4109-a68b-3cc7670e65f8-kube-api-access-cs7q4\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.459726 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-gbf6p"] Jan 30 21:28:24 crc kubenswrapper[4751]: W0130 21:28:24.464919 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod096a86f8_72dc_4bd5_a2b4_48b67a26d792.slice/crio-f0a98c18dea34121b5b9774178a735f5d0b1284591395e8229ba68e9360f5f09 WatchSource:0}: Error finding container f0a98c18dea34121b5b9774178a735f5d0b1284591395e8229ba68e9360f5f09: Status 404 returned error can't find the container with id f0a98c18dea34121b5b9774178a735f5d0b1284591395e8229ba68e9360f5f09 Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.502203 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc"] Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.677361 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.690456 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.734265 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"] Jan 30 21:28:24 crc kubenswrapper[4751]: W0130 21:28:24.744587 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8083b036_5700_420a_ad3f_1e471813194e.slice/crio-2944e54dd81697a117e6129692f5a8574b8e85beab2799f9080c616cf3fdcc54 WatchSource:0}: Error finding container 2944e54dd81697a117e6129692f5a8574b8e85beab2799f9080c616cf3fdcc54: Status 404 returned error can't find the container with id 2944e54dd81697a117e6129692f5a8574b8e85beab2799f9080c616cf3fdcc54 Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.771706 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.772485 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.786233 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.786522 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.797383 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.827037 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" event={"ID":"8083b036-5700-420a-ad3f-1e471813194e","Type":"ContainerStarted","Data":"2944e54dd81697a117e6129692f5a8574b8e85beab2799f9080c616cf3fdcc54"} Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.828102 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" event={"ID":"096a86f8-72dc-4bd5-a2b4-48b67a26d792","Type":"ContainerStarted","Data":"f0a98c18dea34121b5b9774178a735f5d0b1284591395e8229ba68e9360f5f09"} Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.828923 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" event={"ID":"d066c155-02e0-448e-9d4c-f578a36e553b","Type":"ContainerStarted","Data":"8d24abdf2e7ae192934f48cb88b9651e61760a7ee6004118ae7f48385ce382ca"} Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.857119 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.858587 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.863541 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.864241 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.911575 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.969645 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.969708 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.969775 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-482c967c-287e-45a8-9b36-a1858d3f6deb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-482c967c-287e-45a8-9b36-a1858d3f6deb\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.969818 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.969850 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-config\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.969901 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.969924 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f247b61-4ba2-4c4e-8d97-c16900635ddc-config\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 
21:28:24.969952 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.969981 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0b035f56-8a65-4d76-b020-d0cb55e72851\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b035f56-8a65-4d76-b020-d0cb55e72851\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.970010 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f7849282-d14c-4ebf-9847-4c96c23ead9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7849282-d14c-4ebf-9847-4c96c23ead9f\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.970034 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnf5b\" (UniqueName: \"kubernetes.io/projected/4f247b61-4ba2-4c4e-8d97-c16900635ddc-kube-api-access-xnf5b\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.970072 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.970094 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.970135 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptspc\" (UniqueName: \"kubernetes.io/projected/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-kube-api-access-ptspc\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.970155 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.975199 4751 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.976246 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.977827 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.978345 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.993019 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071351 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071394 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-config\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071477 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f247b61-4ba2-4c4e-8d97-c16900635ddc-config\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071523 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f7849282-d14c-4ebf-9847-4c96c23ead9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7849282-d14c-4ebf-9847-4c96c23ead9f\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071542 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071565 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071584 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: 
\"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071610 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071634 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071653 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071683 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-482c967c-287e-45a8-9b36-a1858d3f6deb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-482c967c-287e-45a8-9b36-a1858d3f6deb\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071701 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071731 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071750 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071772 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0b035f56-8a65-4d76-b020-d0cb55e72851\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b035f56-8a65-4d76-b020-d0cb55e72851\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071792 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnf5b\" 
(UniqueName: \"kubernetes.io/projected/4f247b61-4ba2-4c4e-8d97-c16900635ddc-kube-api-access-xnf5b\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071813 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-config\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.072016 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.072033 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptspc\" (UniqueName: \"kubernetes.io/projected/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-kube-api-access-ptspc\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.072047 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kqkz\" (UniqueName: \"kubernetes.io/projected/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-kube-api-access-6kqkz\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.072066 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.072084 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7e3292fe-fbaf-4cb3-82db-8bb5a0e7ea0a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7e3292fe-fbaf-4cb3-82db-8bb5a0e7ea0a\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.072380 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f247b61-4ba2-4c4e-8d97-c16900635ddc-config\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.072990 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 
crc kubenswrapper[4751]: I0130 21:28:25.073398 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-config\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.073767 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.075196 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.075253 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f7849282-d14c-4ebf-9847-4c96c23ead9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7849282-d14c-4ebf-9847-4c96c23ead9f\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3e5a898cde78503bcbc69e1309804887a41e9299a5bd2218ae892ae76cb82ed7/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.075374 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.075440 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0b035f56-8a65-4d76-b020-d0cb55e72851\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b035f56-8a65-4d76-b020-d0cb55e72851\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b845cc0feb948dbb7deae79b7b516e3f66551a47214c7ebc82092ce5df79bd36/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.077472 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
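[annotation — not part of the captured journal] The csi_attacher.go:380 entries above show the kubelet skipping the device-staging step (MountDevice / NodeStageVolume) for the hostpath-provisioner PVCs because the driver does not advertise the CSI STAGE_UNSTAGE_VOLUME node capability; each volume therefore proceeds directly to the per-pod publish step. A minimal Go sketch of that gating decision, using illustrative types rather than kubelet's real ones (the staging path below is abbreviated):

package main

import "fmt"

// Simplified model of the gating seen in the csi_attacher.go lines:
// NodeStageVolume runs only when the driver advertises the
// STAGE_UNSTAGE_VOLUME node service capability.
type nodeCapability string

const capStageUnstage nodeCapability = "STAGE_UNSTAGE_VOLUME"

type csiDriver struct {
	name string
	caps map[nodeCapability]bool
}

func (d *csiDriver) mountDevice(volumeID, stagingPath string) error {
	if !d.caps[capStageUnstage] {
		// Matches the logged behavior: "STAGE_UNSTAGE_VOLUME capability
		// not set. Skipping MountDevice..." — the volume goes straight
		// to per-pod SetUp (NodePublishVolume).
		fmt.Printf("%s: capability not set, skipping MountDevice for %s\n", d.name, volumeID)
		return nil
	}
	fmt.Printf("%s: NodeStageVolume(%s) -> %s\n", d.name, volumeID, stagingPath)
	return nil
}

func main() {
	// The hostpath provisioner advertises no staging capability.
	hostpath := &csiDriver{name: "kubevirt.io.hostpath-provisioner", caps: map[nodeCapability]bool{}}
	_ = hostpath.mountDevice("pvc-f7849282-d14c-4ebf-9847-4c96c23ead9f", "/var/lib/kubelet/plugins/.../globalmount")
}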
Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.077546 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-482c967c-287e-45a8-9b36-a1858d3f6deb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-482c967c-287e-45a8-9b36-a1858d3f6deb\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7ce2ba0f1123ef06fae8236ed24c90f356dfddc255bd41ef3c9d05613a53ed99/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.078676 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.080190 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.080245 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.088123 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.094544 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.094938 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.094978 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnf5b\" (UniqueName: \"kubernetes.io/projected/4f247b61-4ba2-4c4e-8d97-c16900635ddc-kube-api-access-xnf5b\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.096154 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptspc\" (UniqueName: 
\"kubernetes.io/projected/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-kube-api-access-ptspc\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.109937 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f7849282-d14c-4ebf-9847-4c96c23ead9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7849282-d14c-4ebf-9847-4c96c23ead9f\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.115637 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-482c967c-287e-45a8-9b36-a1858d3f6deb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-482c967c-287e-45a8-9b36-a1858d3f6deb\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.133709 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0b035f56-8a65-4d76-b020-d0cb55e72851\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b035f56-8a65-4d76-b020-d0cb55e72851\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.153812 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"] Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.173384 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.173900 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.173974 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-config\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.174000 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.174018 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kqkz\" (UniqueName: 
\"kubernetes.io/projected/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-kube-api-access-6kqkz\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.174038 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7e3292fe-fbaf-4cb3-82db-8bb5a0e7ea0a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7e3292fe-fbaf-4cb3-82db-8bb5a0e7ea0a\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.174072 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.175871 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.176690 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-config\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.178149 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.178662 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.178808 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.185540 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.185572 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7e3292fe-fbaf-4cb3-82db-8bb5a0e7ea0a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7e3292fe-fbaf-4cb3-82db-8bb5a0e7ea0a\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fa52f9e922426d130780c9aac3e51ba77ffea30778644523bdaa2a2ecdc4e60e/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.190240 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kqkz\" (UniqueName: \"kubernetes.io/projected/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-kube-api-access-6kqkz\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.207412 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.213456 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7e3292fe-fbaf-4cb3-82db-8bb5a0e7ea0a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7e3292fe-fbaf-4cb3-82db-8bb5a0e7ea0a\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.225190 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"] Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.290544 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.397823 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.538575 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 30 21:28:25 crc kubenswrapper[4751]: W0130 21:28:25.549813 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12f75dd3_7d12_4b19_8e7d_cfef30b3f0ac.slice/crio-ab7e4a0b643be9d17ecad85a9977d935dbb719d2107da3f2995cf2ef9d60c943 WatchSource:0}: Error finding container ab7e4a0b643be9d17ecad85a9977d935dbb719d2107da3f2995cf2ef9d60c943: Status 404 returned error can't find the container with id ab7e4a0b643be9d17ecad85a9977d935dbb719d2107da3f2995cf2ef9d60c943 Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.651204 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 30 21:28:25 crc kubenswrapper[4751]: W0130 21:28:25.659088 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82dfa01d_f00f_4e1c_ab66_d8fbc48eaf76.slice/crio-71386c36ec245b7811ab524647b4ad53724bc23ae264a9d5a21bc01d54027eb3 WatchSource:0}: Error finding container 71386c36ec245b7811ab524647b4ad53724bc23ae264a9d5a21bc01d54027eb3: Status 404 returned error can't find the container with id 71386c36ec245b7811ab524647b4ad53724bc23ae264a9d5a21bc01d54027eb3 Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.843048 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" event={"ID":"653268f5-1827-4109-a68b-3cc7670e65f8","Type":"ContainerStarted","Data":"c6fbd96c4addb2dec56ee8314209c9470e2f3fbf8c73adb954d388e26fe74d5c"} Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.849758 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76","Type":"ContainerStarted","Data":"71386c36ec245b7811ab524647b4ad53724bc23ae264a9d5a21bc01d54027eb3"} Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.853646 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" event={"ID":"326140a4-6f2a-48c1-b5a2-0b02ce345c50","Type":"ContainerStarted","Data":"55ae5758d9de3673fa2ac8febb3c7b25e9432bf5628cda64c986848cadb9da21"} Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.855412 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac","Type":"ContainerStarted","Data":"ab7e4a0b643be9d17ecad85a9977d935dbb719d2107da3f2995cf2ef9d60c943"} Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.857539 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 30 21:28:26 crc kubenswrapper[4751]: I0130 21:28:26.862653 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"4f247b61-4ba2-4c4e-8d97-c16900635ddc","Type":"ContainerStarted","Data":"7c41fc0b98b13bb8dece6969a16850c5d3e819c43ec0be436aef52ce6830ab92"} Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.876615 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" 
event={"ID":"326140a4-6f2a-48c1-b5a2-0b02ce345c50","Type":"ContainerStarted","Data":"7340c9d6a9351437f97c77676469e7a94e839e93f944efc8e06a8aa6fe947937"} Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.879713 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" event={"ID":"8083b036-5700-420a-ad3f-1e471813194e","Type":"ContainerStarted","Data":"bc527c14d05287d241de1e09879a365c36c5e5456c1b761dbf065ff5b723d2f9"} Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.879866 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.882179 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac","Type":"ContainerStarted","Data":"33c1b45afcaae3d9bc0ff29ee0dc444f6f57d8150232ed6ed60da6485d984e7d"} Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.882726 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.884739 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"4f247b61-4ba2-4c4e-8d97-c16900635ddc","Type":"ContainerStarted","Data":"1a0ee61b6cebab1975995cf31396024a857abdc74a4e9e4d35dccf868f60b7a6"} Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.885356 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.887267 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" event={"ID":"653268f5-1827-4109-a68b-3cc7670e65f8","Type":"ContainerStarted","Data":"d9cab196a72ee1a74d563246067718b01a7cecf17733523f2c73e5ea1bb34de1"} Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.888777 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" event={"ID":"096a86f8-72dc-4bd5-a2b4-48b67a26d792","Type":"ContainerStarted","Data":"a4211917bdb9b586412f60364cd7b7243ff8c2ad371a087663cfdbdeb51d3896"} Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.889273 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.895162 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" event={"ID":"d066c155-02e0-448e-9d4c-f578a36e553b","Type":"ContainerStarted","Data":"ba5282dca37c18d592f18b4c9bbdbc820e9851689266de48bd0b6d9b95b790cb"} Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.895260 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.896791 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76","Type":"ContainerStarted","Data":"d8bee72bccad9b637330d790db23fb146c116df6283a94912bda28b5d5412eaf"} Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.896938 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.904897 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" podStartSLOduration=2.416258354 podStartE2EDuration="5.904870855s" podCreationTimestamp="2026-01-30 21:28:23 +0000 UTC" firstStartedPulling="2026-01-30 21:28:24.747677728 +0000 UTC m=+843.493500377" lastFinishedPulling="2026-01-30 21:28:28.236290229 +0000 UTC m=+846.982112878" observedRunningTime="2026-01-30 21:28:28.902976914 +0000 UTC m=+847.648799563" watchObservedRunningTime="2026-01-30 21:28:28.904870855 +0000 UTC m=+847.650693514" Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.924538 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" podStartSLOduration=2.239709826 podStartE2EDuration="5.92451768s" podCreationTimestamp="2026-01-30 21:28:23 +0000 UTC" firstStartedPulling="2026-01-30 21:28:24.515204156 +0000 UTC m=+843.261026805" lastFinishedPulling="2026-01-30 21:28:28.200012 +0000 UTC m=+846.945834659" observedRunningTime="2026-01-30 21:28:28.919585688 +0000 UTC m=+847.665408337" watchObservedRunningTime="2026-01-30 21:28:28.92451768 +0000 UTC m=+847.670340329" Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.969265 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" podStartSLOduration=2.203128739 podStartE2EDuration="5.969240675s" podCreationTimestamp="2026-01-30 21:28:23 +0000 UTC" firstStartedPulling="2026-01-30 21:28:24.467934143 +0000 UTC m=+843.213756792" lastFinishedPulling="2026-01-30 21:28:28.234046079 +0000 UTC m=+846.979868728" observedRunningTime="2026-01-30 21:28:28.943428945 +0000 UTC m=+847.689251594" watchObservedRunningTime="2026-01-30 21:28:28.969240675 +0000 UTC m=+847.715063344" Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.979062 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.603218001 podStartE2EDuration="5.979038117s" podCreationTimestamp="2026-01-30 21:28:23 +0000 UTC" firstStartedPulling="2026-01-30 21:28:25.865000145 +0000 UTC m=+844.610822794" lastFinishedPulling="2026-01-30 21:28:28.240820261 +0000 UTC m=+846.986642910" observedRunningTime="2026-01-30 21:28:28.965792152 +0000 UTC m=+847.711614811" watchObservedRunningTime="2026-01-30 21:28:28.979038117 +0000 UTC m=+847.724860766" Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.999583 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.426118009 podStartE2EDuration="5.999561925s" podCreationTimestamp="2026-01-30 21:28:23 +0000 UTC" firstStartedPulling="2026-01-30 21:28:25.661268271 +0000 UTC m=+844.407090920" lastFinishedPulling="2026-01-30 21:28:28.234712177 +0000 UTC m=+846.980534836" observedRunningTime="2026-01-30 21:28:28.983159207 +0000 UTC m=+847.728981856" watchObservedRunningTime="2026-01-30 21:28:28.999561925 +0000 UTC m=+847.745384564" Jan 30 21:28:29 crc kubenswrapper[4751]: I0130 21:28:29.005594 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.324192906 podStartE2EDuration="6.005571896s" podCreationTimestamp="2026-01-30 21:28:23 +0000 UTC" 
firstStartedPulling="2026-01-30 21:28:25.552594387 +0000 UTC m=+844.298417036" lastFinishedPulling="2026-01-30 21:28:28.233973357 +0000 UTC m=+846.979796026" observedRunningTime="2026-01-30 21:28:29.004051905 +0000 UTC m=+847.749874554" watchObservedRunningTime="2026-01-30 21:28:29.005571896 +0000 UTC m=+847.751394545" Jan 30 21:28:30 crc kubenswrapper[4751]: I0130 21:28:30.916260 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" event={"ID":"653268f5-1827-4109-a68b-3cc7670e65f8","Type":"ContainerStarted","Data":"dcbe57c86897fa5801e7854f0c7ed93a8b1bba3fa8aace2d9fcdfac51a32cef6"} Jan 30 21:28:30 crc kubenswrapper[4751]: I0130 21:28:30.916808 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:30 crc kubenswrapper[4751]: I0130 21:28:30.916862 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:30 crc kubenswrapper[4751]: I0130 21:28:30.918611 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" event={"ID":"326140a4-6f2a-48c1-b5a2-0b02ce345c50","Type":"ContainerStarted","Data":"fbf7eb6d5ff5c96ac30c4b488638e2bc644bc299c8e0390cb6700a3c5a4f0a2d"} Jan 30 21:28:30 crc kubenswrapper[4751]: I0130 21:28:30.926636 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:30 crc kubenswrapper[4751]: I0130 21:28:30.932714 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:30 crc kubenswrapper[4751]: I0130 21:28:30.944896 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" podStartSLOduration=1.538743586 podStartE2EDuration="6.944880727s" podCreationTimestamp="2026-01-30 21:28:24 +0000 UTC" firstStartedPulling="2026-01-30 21:28:25.231037614 +0000 UTC m=+843.976860283" lastFinishedPulling="2026-01-30 21:28:30.637174765 +0000 UTC m=+849.382997424" observedRunningTime="2026-01-30 21:28:30.94238348 +0000 UTC m=+849.688206149" watchObservedRunningTime="2026-01-30 21:28:30.944880727 +0000 UTC m=+849.690703376" Jan 30 21:28:30 crc kubenswrapper[4751]: I0130 21:28:30.967268 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" podStartSLOduration=1.504712608 podStartE2EDuration="6.967252975s" podCreationTimestamp="2026-01-30 21:28:24 +0000 UTC" firstStartedPulling="2026-01-30 21:28:25.15900875 +0000 UTC m=+843.904831389" lastFinishedPulling="2026-01-30 21:28:30.621549097 +0000 UTC m=+849.367371756" observedRunningTime="2026-01-30 21:28:30.9621938 +0000 UTC m=+849.708016449" watchObservedRunningTime="2026-01-30 21:28:30.967252975 +0000 UTC m=+849.713075624" Jan 30 21:28:31 crc kubenswrapper[4751]: I0130 21:28:31.929268 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:31 crc kubenswrapper[4751]: I0130 21:28:31.929784 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:31 crc kubenswrapper[4751]: I0130 21:28:31.946921 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:31 crc kubenswrapper[4751]: I0130 21:28:31.948192 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:43 crc kubenswrapper[4751]: I0130 21:28:43.984235 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:44 crc kubenswrapper[4751]: I0130 21:28:44.121354 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:44 crc kubenswrapper[4751]: I0130 21:28:44.245914 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:45 crc kubenswrapper[4751]: I0130 21:28:45.214905 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:45 crc kubenswrapper[4751]: I0130 21:28:45.317884 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:45 crc kubenswrapper[4751]: I0130 21:28:45.405058 4751 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Jan 30 21:28:45 crc kubenswrapper[4751]: I0130 21:28:45.405104 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="4f247b61-4ba2-4c4e-8d97-c16900635ddc" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 21:28:54 crc kubenswrapper[4751]: I0130 21:28:54.126689 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:28:54 crc kubenswrapper[4751]: I0130 21:28:54.127435 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:28:55 crc kubenswrapper[4751]: I0130 21:28:55.417569 4751 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Jan 30 21:28:55 crc kubenswrapper[4751]: I0130 21:28:55.417652 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="4f247b61-4ba2-4c4e-8d97-c16900635ddc" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 21:29:05 crc kubenswrapper[4751]: I0130 21:29:05.404413 4751 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" 
start-of-body=Ingester not ready: waiting for 15s after being ready Jan 30 21:29:05 crc kubenswrapper[4751]: I0130 21:29:05.405051 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="4f247b61-4ba2-4c4e-8d97-c16900635ddc" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 21:29:15 crc kubenswrapper[4751]: I0130 21:29:15.402669 4751 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Jan 30 21:29:15 crc kubenswrapper[4751]: I0130 21:29:15.403546 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="4f247b61-4ba2-4c4e-8d97-c16900635ddc" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 21:29:24 crc kubenswrapper[4751]: I0130 21:29:24.127030 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:29:24 crc kubenswrapper[4751]: I0130 21:29:24.127678 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:29:25 crc kubenswrapper[4751]: I0130 21:29:25.403004 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.524085 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-fs9d8"] Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.525908 4751 util.go:30] "No sandbox for pod can be found. 
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.534830 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-fs9d8"]
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.536870 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-jp6fq"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.536974 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.537027 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.536983 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.538165 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.548237 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.570828 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-entrypoint\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.570876 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-token\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.570943 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/b6746a54-2590-4b31-99ef-332ede51c384-datadir\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.570978 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6746a54-2590-4b31-99ef-332ede51c384-tmp\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.571003 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.571029 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-metrics\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.571074 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-sa-token\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.571122 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7pq2\" (UniqueName: \"kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-kube-api-access-n7pq2\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.571153 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-trusted-ca\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.571185 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config-openshift-service-cacrt\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.571347 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-syslog-receiver\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.616446 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-fs9d8"]
Jan 30 21:29:42 crc kubenswrapper[4751]: E0130 21:29:42.617110 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-n7pq2 metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-fs9d8" podUID="b6746a54-2590-4b31-99ef-332ede51c384"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.672342 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/b6746a54-2590-4b31-99ef-332ede51c384-datadir\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.672401 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6746a54-2590-4b31-99ef-332ede51c384-tmp\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.672421 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.672435 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-metrics\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.672474 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-sa-token\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.672504 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7pq2\" (UniqueName: \"kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-kube-api-access-n7pq2\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.672529 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-trusted-ca\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.672522 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/b6746a54-2590-4b31-99ef-332ede51c384-datadir\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.673289 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.672552 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config-openshift-service-cacrt\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.673435 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config-openshift-service-cacrt\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.673736 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-syslog-receiver\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.673779 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-token\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
"operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-token\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.673797 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-entrypoint\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: E0130 21:29:42.673982 4751 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Jan 30 21:29:42 crc kubenswrapper[4751]: E0130 21:29:42.674041 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-syslog-receiver podName:b6746a54-2590-4b31-99ef-332ede51c384 nodeName:}" failed. No retries permitted until 2026-01-30 21:29:43.174025958 +0000 UTC m=+921.919848597 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-syslog-receiver") pod "collector-fs9d8" (UID: "b6746a54-2590-4b31-99ef-332ede51c384") : secret "collector-syslog-receiver" not found Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.674468 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-trusted-ca\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.674583 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-entrypoint\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.677359 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6746a54-2590-4b31-99ef-332ede51c384-tmp\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.678509 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-token\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.678758 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-metrics\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.695116 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-sa-token\") pod \"collector-fs9d8\" (UID: 
\"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.695213 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7pq2\" (UniqueName: \"kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-kube-api-access-n7pq2\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.186012 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-syslog-receiver\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.194101 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-syslog-receiver\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.558631 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-fs9d8" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.570978 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-fs9d8" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.694931 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.695073 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-sa-token\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.695177 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-token\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.695215 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-syslog-receiver\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.695253 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config-openshift-service-cacrt\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.695355 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7pq2\" 
(UniqueName: \"kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-kube-api-access-n7pq2\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.695387 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-trusted-ca\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.695437 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-entrypoint\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.695465 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/b6746a54-2590-4b31-99ef-332ede51c384-datadir\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.695518 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6746a54-2590-4b31-99ef-332ede51c384-tmp\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.695596 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-metrics\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.696222 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config" (OuterVolumeSpecName: "config") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.696240 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6746a54-2590-4b31-99ef-332ede51c384-datadir" (OuterVolumeSpecName: "datadir") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.696270 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "config-openshift-service-cacrt". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.696461 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.697368 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.700778 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.702095 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6746a54-2590-4b31-99ef-332ede51c384-tmp" (OuterVolumeSpecName: "tmp") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.702131 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-token" (OuterVolumeSpecName: "collector-token") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.703234 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-kube-api-access-n7pq2" (OuterVolumeSpecName: "kube-api-access-n7pq2") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "kube-api-access-n7pq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.711543 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-metrics" (OuterVolumeSpecName: "metrics") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.713857 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-sa-token" (OuterVolumeSpecName: "sa-token") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797582 4751 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6746a54-2590-4b31-99ef-332ede51c384-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797655 4751 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797686 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797712 4751 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797739 4751 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797767 4751 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797793 4751 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797813 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7pq2\" (UniqueName: \"kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-kube-api-access-n7pq2\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797832 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797849 4751 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-entrypoint\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797871 4751 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/b6746a54-2590-4b31-99ef-332ede51c384-datadir\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.563478 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-fs9d8" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.611305 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-fs9d8"] Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.623739 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-fs9d8"] Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.633386 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-f6llv"] Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.634714 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.638788 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.639010 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.639309 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.639401 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-jp6fq" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.639393 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.646908 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.652548 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-f6llv"] Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.711387 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z2w7\" (UniqueName: \"kubernetes.io/projected/d1f22c66-daa2-4dd7-8394-ceab983464e2-kube-api-access-5z2w7\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.711459 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d1f22c66-daa2-4dd7-8394-ceab983464e2-collector-syslog-receiver\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.711514 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d1f22c66-daa2-4dd7-8394-ceab983464e2-metrics\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.711547 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d1f22c66-daa2-4dd7-8394-ceab983464e2-collector-token\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.711631 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d1f22c66-daa2-4dd7-8394-ceab983464e2-datadir\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.711708 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-trusted-ca\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.711761 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-config-openshift-service-cacrt\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.711811 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-config\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.711851 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d1f22c66-daa2-4dd7-8394-ceab983464e2-sa-token\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.711908 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d1f22c66-daa2-4dd7-8394-ceab983464e2-tmp\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.814013 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z2w7\" (UniqueName: \"kubernetes.io/projected/d1f22c66-daa2-4dd7-8394-ceab983464e2-kube-api-access-5z2w7\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.814598 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d1f22c66-daa2-4dd7-8394-ceab983464e2-collector-syslog-receiver\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.814852 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d1f22c66-daa2-4dd7-8394-ceab983464e2-metrics\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.815115 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d1f22c66-daa2-4dd7-8394-ceab983464e2-collector-token\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.815474 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-entrypoint\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.815721 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d1f22c66-daa2-4dd7-8394-ceab983464e2-datadir\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.815937 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-trusted-ca\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.816144 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-config-openshift-service-cacrt\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.816371 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-config\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.816588 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d1f22c66-daa2-4dd7-8394-ceab983464e2-sa-token\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.816833 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d1f22c66-daa2-4dd7-8394-ceab983464e2-tmp\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.817869 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-trusted-ca\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.817944 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d1f22c66-daa2-4dd7-8394-ceab983464e2-datadir\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.816589 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-entrypoint\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.818644 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-config\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.819574 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-config-openshift-service-cacrt\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.820574 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d1f22c66-daa2-4dd7-8394-ceab983464e2-tmp\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.821057 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d1f22c66-daa2-4dd7-8394-ceab983464e2-collector-syslog-receiver\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.823218 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d1f22c66-daa2-4dd7-8394-ceab983464e2-collector-token\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.824874 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d1f22c66-daa2-4dd7-8394-ceab983464e2-metrics\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.840500 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z2w7\" (UniqueName: \"kubernetes.io/projected/d1f22c66-daa2-4dd7-8394-ceab983464e2-kube-api-access-5z2w7\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.846138 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d1f22c66-daa2-4dd7-8394-ceab983464e2-sa-token\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.950417 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-f6llv"
Jan 30 21:29:45 crc kubenswrapper[4751]: I0130 21:29:45.427892 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-f6llv"]
Jan 30 21:29:45 crc kubenswrapper[4751]: W0130 21:29:45.450532 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1f22c66_daa2_4dd7_8394_ceab983464e2.slice/crio-827bb56764fc7da48376796d3fb2ffc010f527570d6089a054ae159c6450c370 WatchSource:0}: Error finding container 827bb56764fc7da48376796d3fb2ffc010f527570d6089a054ae159c6450c370: Status 404 returned error can't find the container with id 827bb56764fc7da48376796d3fb2ffc010f527570d6089a054ae159c6450c370
Jan 30 21:29:45 crc kubenswrapper[4751]: I0130 21:29:45.574211 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-f6llv" event={"ID":"d1f22c66-daa2-4dd7-8394-ceab983464e2","Type":"ContainerStarted","Data":"827bb56764fc7da48376796d3fb2ffc010f527570d6089a054ae159c6450c370"}
Jan 30 21:29:45 crc kubenswrapper[4751]: I0130 21:29:45.991475 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6746a54-2590-4b31-99ef-332ede51c384" path="/var/lib/kubelet/pods/b6746a54-2590-4b31-99ef-332ede51c384/volumes"
Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.188170 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8clf6"]
Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.189873 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8clf6"
Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.195153 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8clf6"]
Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.261932 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-catalog-content\") pod \"redhat-marketplace-8clf6\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " pod="openshift-marketplace/redhat-marketplace-8clf6"
Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.261993 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-utilities\") pod \"redhat-marketplace-8clf6\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " pod="openshift-marketplace/redhat-marketplace-8clf6"
Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.262023 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dfn9\" (UniqueName: \"kubernetes.io/projected/cc5cceb1-605d-4f28-a5ed-a70292156bf4-kube-api-access-9dfn9\") pod \"redhat-marketplace-8clf6\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " pod="openshift-marketplace/redhat-marketplace-8clf6"
Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.363562 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-catalog-content\") pod \"redhat-marketplace-8clf6\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " pod="openshift-marketplace/redhat-marketplace-8clf6"
Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.363689 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-utilities\") pod \"redhat-marketplace-8clf6\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " pod="openshift-marketplace/redhat-marketplace-8clf6"
Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.363778 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dfn9\" (UniqueName: \"kubernetes.io/projected/cc5cceb1-605d-4f28-a5ed-a70292156bf4-kube-api-access-9dfn9\") pod \"redhat-marketplace-8clf6\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " pod="openshift-marketplace/redhat-marketplace-8clf6"
Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.364401 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-catalog-content\") pod \"redhat-marketplace-8clf6\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " pod="openshift-marketplace/redhat-marketplace-8clf6"
Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.364779 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-utilities\") pod \"redhat-marketplace-8clf6\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " pod="openshift-marketplace/redhat-marketplace-8clf6"
Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.389088 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dfn9\" (UniqueName: \"kubernetes.io/projected/cc5cceb1-605d-4f28-a5ed-a70292156bf4-kube-api-access-9dfn9\") pod \"redhat-marketplace-8clf6\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " pod="openshift-marketplace/redhat-marketplace-8clf6"
Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.511515 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8clf6"
Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.634572 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-f6llv" event={"ID":"d1f22c66-daa2-4dd7-8394-ceab983464e2","Type":"ContainerStarted","Data":"8afd54fb03a09aa06a3be18abbe11b4261679a32c0b42df1754fb5bbbe369bc6"}
Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.667532 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-f6llv" podStartSLOduration=2.1333414729999998 podStartE2EDuration="8.667511673s" podCreationTimestamp="2026-01-30 21:29:44 +0000 UTC" firstStartedPulling="2026-01-30 21:29:45.452263941 +0000 UTC m=+924.198086590" lastFinishedPulling="2026-01-30 21:29:51.986434141 +0000 UTC m=+930.732256790" observedRunningTime="2026-01-30 21:29:52.656246541 +0000 UTC m=+931.402069190" watchObservedRunningTime="2026-01-30 21:29:52.667511673 +0000 UTC m=+931.413334322"
Jan 30 21:29:53 crc kubenswrapper[4751]: I0130 21:29:53.027746 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8clf6"]
Jan 30 21:29:53 crc kubenswrapper[4751]: I0130 21:29:53.643253 4751 generic.go:334] "Generic (PLEG): container finished" podID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" containerID="036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076" exitCode=0
Jan 30 21:29:53 crc kubenswrapper[4751]: I0130 21:29:53.643351 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8clf6" event={"ID":"cc5cceb1-605d-4f28-a5ed-a70292156bf4","Type":"ContainerDied","Data":"036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076"}
Jan 30 21:29:53 crc kubenswrapper[4751]: I0130 21:29:53.643692 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8clf6" event={"ID":"cc5cceb1-605d-4f28-a5ed-a70292156bf4","Type":"ContainerStarted","Data":"e11d99f981d4367e383246d739931587292a66f139c6473cdf0a79c78b681de8"}
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.127186 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.127262 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.127311 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp"
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.128097 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad350159473538b7294a1cb17b3c91bed6ccae12ecd005a2dc1c208ac650225b"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.128169 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://ad350159473538b7294a1cb17b3c91bed6ccae12ecd005a2dc1c208ac650225b" gracePeriod=600
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.652106 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="ad350159473538b7294a1cb17b3c91bed6ccae12ecd005a2dc1c208ac650225b" exitCode=0
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.652187 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"ad350159473538b7294a1cb17b3c91bed6ccae12ecd005a2dc1c208ac650225b"}
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.653208 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"4084bd2e19ec539ac0bc075f3b6a34007de80a7e632827590212d241d8cb0234"}
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.653247 4751 scope.go:117] "RemoveContainer" containerID="a610754d75a118a60637a1e554575fc5a5a243d54c20205f4fedf2c00e804266"
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.656257 4751 generic.go:334] "Generic (PLEG): container finished" podID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" containerID="48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675" exitCode=0
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.656308 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8clf6" event={"ID":"cc5cceb1-605d-4f28-a5ed-a70292156bf4","Type":"ContainerDied","Data":"48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675"}
Jan 30 21:29:55 crc kubenswrapper[4751]: I0130 21:29:55.666938 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8clf6" event={"ID":"cc5cceb1-605d-4f28-a5ed-a70292156bf4","Type":"ContainerStarted","Data":"d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3"}
Jan 30 21:29:55 crc kubenswrapper[4751]: I0130 21:29:55.685972 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8clf6" podStartSLOduration=2.315184462 podStartE2EDuration="3.685955217s" podCreationTimestamp="2026-01-30 21:29:52 +0000 UTC" firstStartedPulling="2026-01-30 21:29:53.646100492 +0000 UTC m=+932.391923141" lastFinishedPulling="2026-01-30 21:29:55.016871247 +0000 UTC m=+933.762693896" observedRunningTime="2026-01-30 21:29:55.683416729 +0000 UTC m=+934.429239398" watchObservedRunningTime="2026-01-30 21:29:55.685955217 +0000 UTC m=+934.431777876"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.181561 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"]
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.183053 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.185293 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.186159 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.195675 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"]
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.307013 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-config-volume\") pod \"collect-profiles-29496810-rblx8\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.307067 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-secret-volume\") pod \"collect-profiles-29496810-rblx8\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.307250 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwqnl\" (UniqueName: \"kubernetes.io/projected/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-kube-api-access-nwqnl\") pod \"collect-profiles-29496810-rblx8\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.409063 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwqnl\" (UniqueName: \"kubernetes.io/projected/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-kube-api-access-nwqnl\") pod \"collect-profiles-29496810-rblx8\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.409162 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-config-volume\") pod \"collect-profiles-29496810-rblx8\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.409192 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-secret-volume\") pod \"collect-profiles-29496810-rblx8\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.410495 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-config-volume\") pod \"collect-profiles-29496810-rblx8\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.417476 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-secret-volume\") pod \"collect-profiles-29496810-rblx8\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.432925 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwqnl\" (UniqueName: \"kubernetes.io/projected/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-kube-api-access-nwqnl\") pod \"collect-profiles-29496810-rblx8\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.507900 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.978941 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"]
Jan 30 21:30:01 crc kubenswrapper[4751]: I0130 21:30:01.711382 4751 generic.go:334] "Generic (PLEG): container finished" podID="44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6" containerID="664bdfce98a1e87d41664b73e411b35da3c4e69be04f5631e859fc26af9552e4" exitCode=0
Jan 30 21:30:01 crc kubenswrapper[4751]: I0130 21:30:01.711483 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8" event={"ID":"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6","Type":"ContainerDied","Data":"664bdfce98a1e87d41664b73e411b35da3c4e69be04f5631e859fc26af9552e4"}
Jan 30 21:30:01 crc kubenswrapper[4751]: I0130 21:30:01.711667 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8" event={"ID":"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6","Type":"ContainerStarted","Data":"aae75191e239c3a86e97cbe6355a3766cbfeb781c33b0b00aeccf9b73c16953c"}
Jan 30 21:30:02 crc kubenswrapper[4751]: I0130 21:30:02.511629 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8clf6"
Jan 30 21:30:02 crc kubenswrapper[4751]: I0130 21:30:02.511682 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8clf6"
Jan 30 21:30:02 crc kubenswrapper[4751]: I0130 21:30:02.565903 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8clf6"
Jan 30 21:30:02 crc kubenswrapper[4751]: I0130 21:30:02.766553 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8clf6"
Jan 30 21:30:02 crc kubenswrapper[4751]: I0130 21:30:02.825973 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8clf6"]
Jan 30 21:30:02 crc kubenswrapper[4751]: I0130 21:30:02.991860 4751 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8" Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.154108 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-config-volume\") pod \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.155155 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-secret-volume\") pod \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.155194 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwqnl\" (UniqueName: \"kubernetes.io/projected/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-kube-api-access-nwqnl\") pod \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.156957 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-config-volume" (OuterVolumeSpecName: "config-volume") pod "44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6" (UID: "44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.157603 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.162251 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6" (UID: "44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.163470 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-kube-api-access-nwqnl" (OuterVolumeSpecName: "kube-api-access-nwqnl") pod "44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6" (UID: "44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6"). InnerVolumeSpecName "kube-api-access-nwqnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.259255 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.259297 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwqnl\" (UniqueName: \"kubernetes.io/projected/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-kube-api-access-nwqnl\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.738438 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8" event={"ID":"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6","Type":"ContainerDied","Data":"aae75191e239c3a86e97cbe6355a3766cbfeb781c33b0b00aeccf9b73c16953c"} Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.738509 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aae75191e239c3a86e97cbe6355a3766cbfeb781c33b0b00aeccf9b73c16953c" Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.738451 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8" Jan 30 21:30:04 crc kubenswrapper[4751]: I0130 21:30:04.749259 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8clf6" podUID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" containerName="registry-server" containerID="cri-o://d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3" gracePeriod=2 Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.250228 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8clf6" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.421469 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dfn9\" (UniqueName: \"kubernetes.io/projected/cc5cceb1-605d-4f28-a5ed-a70292156bf4-kube-api-access-9dfn9\") pod \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.421543 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-catalog-content\") pod \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.421637 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-utilities\") pod \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.423124 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-utilities" (OuterVolumeSpecName: "utilities") pod "cc5cceb1-605d-4f28-a5ed-a70292156bf4" (UID: "cc5cceb1-605d-4f28-a5ed-a70292156bf4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.432609 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc5cceb1-605d-4f28-a5ed-a70292156bf4-kube-api-access-9dfn9" (OuterVolumeSpecName: "kube-api-access-9dfn9") pod "cc5cceb1-605d-4f28-a5ed-a70292156bf4" (UID: "cc5cceb1-605d-4f28-a5ed-a70292156bf4"). InnerVolumeSpecName "kube-api-access-9dfn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.461431 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc5cceb1-605d-4f28-a5ed-a70292156bf4" (UID: "cc5cceb1-605d-4f28-a5ed-a70292156bf4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.524740 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.524811 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dfn9\" (UniqueName: \"kubernetes.io/projected/cc5cceb1-605d-4f28-a5ed-a70292156bf4-kube-api-access-9dfn9\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.524842 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.761695 4751 generic.go:334] "Generic (PLEG): container finished" podID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" containerID="d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3" exitCode=0 Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.761750 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8clf6" event={"ID":"cc5cceb1-605d-4f28-a5ed-a70292156bf4","Type":"ContainerDied","Data":"d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3"} Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.761819 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8clf6" event={"ID":"cc5cceb1-605d-4f28-a5ed-a70292156bf4","Type":"ContainerDied","Data":"e11d99f981d4367e383246d739931587292a66f139c6473cdf0a79c78b681de8"} Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.761833 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8clf6" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.761850 4751 scope.go:117] "RemoveContainer" containerID="d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.789401 4751 scope.go:117] "RemoveContainer" containerID="48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.820885 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8clf6"] Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.831268 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8clf6"] Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.849611 4751 scope.go:117] "RemoveContainer" containerID="036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.866829 4751 scope.go:117] "RemoveContainer" containerID="d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3" Jan 30 21:30:05 crc kubenswrapper[4751]: E0130 21:30:05.867300 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3\": container with ID starting with d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3 not found: ID does not exist" containerID="d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.867349 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3"} err="failed to get container status \"d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3\": rpc error: code = NotFound desc = could not find container \"d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3\": container with ID starting with d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3 not found: ID does not exist" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.867374 4751 scope.go:117] "RemoveContainer" containerID="48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675" Jan 30 21:30:05 crc kubenswrapper[4751]: E0130 21:30:05.867660 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675\": container with ID starting with 48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675 not found: ID does not exist" containerID="48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.867679 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675"} err="failed to get container status \"48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675\": rpc error: code = NotFound desc = could not find container \"48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675\": container with ID starting with 48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675 not found: ID does not exist" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.867690 4751 scope.go:117] "RemoveContainer" 
containerID="036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076" Jan 30 21:30:05 crc kubenswrapper[4751]: E0130 21:30:05.867921 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076\": container with ID starting with 036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076 not found: ID does not exist" containerID="036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.867943 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076"} err="failed to get container status \"036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076\": rpc error: code = NotFound desc = could not find container \"036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076\": container with ID starting with 036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076 not found: ID does not exist" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.988180 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" path="/var/lib/kubelet/pods/cc5cceb1-605d-4f28-a5ed-a70292156bf4/volumes" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.629966 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cqsvg"] Jan 30 21:30:08 crc kubenswrapper[4751]: E0130 21:30:08.631644 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" containerName="registry-server" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.631682 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" containerName="registry-server" Jan 30 21:30:08 crc kubenswrapper[4751]: E0130 21:30:08.631768 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6" containerName="collect-profiles" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.631818 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6" containerName="collect-profiles" Jan 30 21:30:08 crc kubenswrapper[4751]: E0130 21:30:08.631861 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" containerName="extract-utilities" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.631910 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" containerName="extract-utilities" Jan 30 21:30:08 crc kubenswrapper[4751]: E0130 21:30:08.631946 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" containerName="extract-content" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.631957 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" containerName="extract-content" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.633074 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6" containerName="collect-profiles" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.633160 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" 
containerName="registry-server" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.637317 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.641784 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cqsvg"] Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.680782 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-catalog-content\") pod \"certified-operators-cqsvg\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") " pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.680816 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-utilities\") pod \"certified-operators-cqsvg\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") " pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.680976 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7gzn\" (UniqueName: \"kubernetes.io/projected/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-kube-api-access-d7gzn\") pod \"certified-operators-cqsvg\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") " pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.781674 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7gzn\" (UniqueName: \"kubernetes.io/projected/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-kube-api-access-d7gzn\") pod \"certified-operators-cqsvg\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") " pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.781820 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-catalog-content\") pod \"certified-operators-cqsvg\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") " pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.781854 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-utilities\") pod \"certified-operators-cqsvg\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") " pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.782578 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-utilities\") pod \"certified-operators-cqsvg\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") " pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.782580 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-catalog-content\") pod \"certified-operators-cqsvg\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") " 
pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.811622 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7gzn\" (UniqueName: \"kubernetes.io/projected/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-kube-api-access-d7gzn\") pod \"certified-operators-cqsvg\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") " pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.984449 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:09 crc kubenswrapper[4751]: I0130 21:30:09.487774 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cqsvg"] Jan 30 21:30:09 crc kubenswrapper[4751]: W0130 21:30:09.497246 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod658cd3aa_28fd_4fdd_bbce_ab07effcdc0b.slice/crio-fe5fcfb1fc44ccdf6c4a741d4d16f63708aa615ae6f06f1cd0921da957c348dc WatchSource:0}: Error finding container fe5fcfb1fc44ccdf6c4a741d4d16f63708aa615ae6f06f1cd0921da957c348dc: Status 404 returned error can't find the container with id fe5fcfb1fc44ccdf6c4a741d4d16f63708aa615ae6f06f1cd0921da957c348dc Jan 30 21:30:09 crc kubenswrapper[4751]: I0130 21:30:09.798736 4751 generic.go:334] "Generic (PLEG): container finished" podID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerID="ef163a3afba5ac1eb335431aa2395ea6c9a037884a732e5ca52e5147972cf403" exitCode=0 Jan 30 21:30:09 crc kubenswrapper[4751]: I0130 21:30:09.799003 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqsvg" event={"ID":"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b","Type":"ContainerDied","Data":"ef163a3afba5ac1eb335431aa2395ea6c9a037884a732e5ca52e5147972cf403"} Jan 30 21:30:09 crc kubenswrapper[4751]: I0130 21:30:09.799146 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqsvg" event={"ID":"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b","Type":"ContainerStarted","Data":"fe5fcfb1fc44ccdf6c4a741d4d16f63708aa615ae6f06f1cd0921da957c348dc"} Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.199109 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-82fwr"] Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.201144 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.228169 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-82fwr"] Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.377772 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-utilities\") pod \"community-operators-82fwr\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.377912 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-catalog-content\") pod \"community-operators-82fwr\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.377967 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksmst\" (UniqueName: \"kubernetes.io/projected/3f294d30-8a47-4f81-930d-3c0bbf564a2e-kube-api-access-ksmst\") pod \"community-operators-82fwr\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.479551 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-utilities\") pod \"community-operators-82fwr\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.479813 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-catalog-content\") pod \"community-operators-82fwr\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.479859 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksmst\" (UniqueName: \"kubernetes.io/projected/3f294d30-8a47-4f81-930d-3c0bbf564a2e-kube-api-access-ksmst\") pod \"community-operators-82fwr\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.480157 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-utilities\") pod \"community-operators-82fwr\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.480213 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-catalog-content\") pod \"community-operators-82fwr\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.512866 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ksmst\" (UniqueName: \"kubernetes.io/projected/3f294d30-8a47-4f81-930d-3c0bbf564a2e-kube-api-access-ksmst\") pod \"community-operators-82fwr\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.572700 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.850853 4751 generic.go:334] "Generic (PLEG): container finished" podID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerID="aed57bf3af8a9be4354110c686278989c4d522de65d68ab7801998493854f3c7" exitCode=0 Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.852308 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqsvg" event={"ID":"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b","Type":"ContainerDied","Data":"aed57bf3af8a9be4354110c686278989c4d522de65d68ab7801998493854f3c7"} Jan 30 21:30:15 crc kubenswrapper[4751]: I0130 21:30:15.067991 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-82fwr"] Jan 30 21:30:15 crc kubenswrapper[4751]: W0130 21:30:15.084463 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f294d30_8a47_4f81_930d_3c0bbf564a2e.slice/crio-113c2bb9a9bf499f765a898393b70385f7f56309b5e53986a1cbd21e2ebfc4a9 WatchSource:0}: Error finding container 113c2bb9a9bf499f765a898393b70385f7f56309b5e53986a1cbd21e2ebfc4a9: Status 404 returned error can't find the container with id 113c2bb9a9bf499f765a898393b70385f7f56309b5e53986a1cbd21e2ebfc4a9 Jan 30 21:30:15 crc kubenswrapper[4751]: I0130 21:30:15.863674 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqsvg" event={"ID":"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b","Type":"ContainerStarted","Data":"14dc0debd4e467cc635d37215c68233957a35c1f4ea5e8648c5f3af34e750078"} Jan 30 21:30:15 crc kubenswrapper[4751]: I0130 21:30:15.866415 4751 generic.go:334] "Generic (PLEG): container finished" podID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerID="0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701" exitCode=0 Jan 30 21:30:15 crc kubenswrapper[4751]: I0130 21:30:15.866461 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82fwr" event={"ID":"3f294d30-8a47-4f81-930d-3c0bbf564a2e","Type":"ContainerDied","Data":"0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701"} Jan 30 21:30:15 crc kubenswrapper[4751]: I0130 21:30:15.866528 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82fwr" event={"ID":"3f294d30-8a47-4f81-930d-3c0bbf564a2e","Type":"ContainerStarted","Data":"113c2bb9a9bf499f765a898393b70385f7f56309b5e53986a1cbd21e2ebfc4a9"} Jan 30 21:30:15 crc kubenswrapper[4751]: I0130 21:30:15.883519 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cqsvg" podStartSLOduration=2.416073276 podStartE2EDuration="7.883493064s" podCreationTimestamp="2026-01-30 21:30:08 +0000 UTC" firstStartedPulling="2026-01-30 21:30:09.802017172 +0000 UTC m=+948.547839851" lastFinishedPulling="2026-01-30 21:30:15.26943699 +0000 UTC m=+954.015259639" observedRunningTime="2026-01-30 21:30:15.881755188 +0000 UTC 
m=+954.627577877" watchObservedRunningTime="2026-01-30 21:30:15.883493064 +0000 UTC m=+954.629315763" Jan 30 21:30:18 crc kubenswrapper[4751]: I0130 21:30:18.985359 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:18 crc kubenswrapper[4751]: I0130 21:30:18.985886 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:19 crc kubenswrapper[4751]: I0130 21:30:19.065779 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:20 crc kubenswrapper[4751]: I0130 21:30:20.918699 4751 generic.go:334] "Generic (PLEG): container finished" podID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerID="b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91" exitCode=0 Jan 30 21:30:20 crc kubenswrapper[4751]: I0130 21:30:20.918802 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82fwr" event={"ID":"3f294d30-8a47-4f81-930d-3c0bbf564a2e","Type":"ContainerDied","Data":"b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91"} Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.463387 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.470792 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.476919 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.483699 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.489285 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.496395 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.500175 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cqsvg"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.500414 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cqsvg" podUID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerName="registry-server" containerID="cri-o://14dc0debd4e467cc635d37215c68233957a35c1f4ea5e8648c5f3af34e750078" gracePeriod=30 Jan 30 21:30:21 crc kubenswrapper[4751]: E0130 21:30:21.511781 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="14dc0debd4e467cc635d37215c68233957a35c1f4ea5e8648c5f3af34e750078" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 21:30:21 crc kubenswrapper[4751]: E0130 21:30:21.517819 4751 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="14dc0debd4e467cc635d37215c68233957a35c1f4ea5e8648c5f3af34e750078" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 21:30:21 crc kubenswrapper[4751]: E0130 21:30:21.521854 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 14dc0debd4e467cc635d37215c68233957a35c1f4ea5e8648c5f3af34e750078 is running failed: container process not found" containerID="14dc0debd4e467cc635d37215c68233957a35c1f4ea5e8648c5f3af34e750078" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 21:30:21 crc kubenswrapper[4751]: E0130 21:30:21.522112 4751 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 14dc0debd4e467cc635d37215c68233957a35c1f4ea5e8648c5f3af34e750078 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-cqsvg" podUID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerName="registry-server" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.528284 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-twcnd"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.529020 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-twcnd" podUID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerName="registry-server" containerID="cri-o://da576de8f1c9a0effcc2ee958957d7e00c5cf40151114d08518c5b0c29f0fc29" gracePeriod=30 Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.545272 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-82fwr"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.559498 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bd2xs"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.559801 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bd2xs" podUID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerName="registry-server" containerID="cri-o://960f258866c4433eda726c04fdab80b057c22c0920935676513c71ebdb592216" gracePeriod=30 Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.570237 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-76rml"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.570545 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-76rml" podUID="cdcb33b0-97a6-4ded-96b6-1c5bd9053977" containerName="marketplace-operator" containerID="cri-o://6555bc08329fa2ff543d4810ec47f9a72956f19cbf66209a9749cc91438e7744" gracePeriod=30 Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.576450 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-btf57"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.576730 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-btf57" podUID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerName="registry-server" containerID="cri-o://7b0cb114f2b94c0af64389530dc0e77b4ef4178db18be6009544673f334a8088" 
gracePeriod=30 Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.584367 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s9tfl"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.585304 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.591113 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p4hxc"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.591449 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p4hxc" podUID="448ce159-6181-433b-a28a-d00b9240b5af" containerName="registry-server" containerID="cri-o://a6b343dd72b5235871a55e1a2c2def12bf611b5a0982df3c0c87934e222e51ce" gracePeriod=30 Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.596649 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s9tfl"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.700390 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7804f857-fb14-4305-97cc-c966621a55b2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s9tfl\" (UID: \"7804f857-fb14-4305-97cc-c966621a55b2\") " pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.700461 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7804f857-fb14-4305-97cc-c966621a55b2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s9tfl\" (UID: \"7804f857-fb14-4305-97cc-c966621a55b2\") " pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.700619 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7666v\" (UniqueName: \"kubernetes.io/projected/7804f857-fb14-4305-97cc-c966621a55b2-kube-api-access-7666v\") pod \"marketplace-operator-79b997595-s9tfl\" (UID: \"7804f857-fb14-4305-97cc-c966621a55b2\") " pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.802735 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7666v\" (UniqueName: \"kubernetes.io/projected/7804f857-fb14-4305-97cc-c966621a55b2-kube-api-access-7666v\") pod \"marketplace-operator-79b997595-s9tfl\" (UID: \"7804f857-fb14-4305-97cc-c966621a55b2\") " pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.802854 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7804f857-fb14-4305-97cc-c966621a55b2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s9tfl\" (UID: \"7804f857-fb14-4305-97cc-c966621a55b2\") " pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.802951 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/7804f857-fb14-4305-97cc-c966621a55b2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s9tfl\" (UID: \"7804f857-fb14-4305-97cc-c966621a55b2\") " pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.807490 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7804f857-fb14-4305-97cc-c966621a55b2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s9tfl\" (UID: \"7804f857-fb14-4305-97cc-c966621a55b2\") " pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.812931 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7804f857-fb14-4305-97cc-c966621a55b2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s9tfl\" (UID: \"7804f857-fb14-4305-97cc-c966621a55b2\") " pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.825157 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7666v\" (UniqueName: \"kubernetes.io/projected/7804f857-fb14-4305-97cc-c966621a55b2-kube-api-access-7666v\") pod \"marketplace-operator-79b997595-s9tfl\" (UID: \"7804f857-fb14-4305-97cc-c966621a55b2\") " pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.938688 4751 generic.go:334] "Generic (PLEG): container finished" podID="cdcb33b0-97a6-4ded-96b6-1c5bd9053977" containerID="6555bc08329fa2ff543d4810ec47f9a72956f19cbf66209a9749cc91438e7744" exitCode=0 Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.938783 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-76rml" event={"ID":"cdcb33b0-97a6-4ded-96b6-1c5bd9053977","Type":"ContainerDied","Data":"6555bc08329fa2ff543d4810ec47f9a72956f19cbf66209a9749cc91438e7744"} Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.944392 4751 generic.go:334] "Generic (PLEG): container finished" podID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerID="960f258866c4433eda726c04fdab80b057c22c0920935676513c71ebdb592216" exitCode=0 Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.944473 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd2xs" event={"ID":"a791b2a3-aead-4130-bdfa-e219f2d47593","Type":"ContainerDied","Data":"960f258866c4433eda726c04fdab80b057c22c0920935676513c71ebdb592216"} Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.946690 4751 generic.go:334] "Generic (PLEG): container finished" podID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerID="14dc0debd4e467cc635d37215c68233957a35c1f4ea5e8648c5f3af34e750078" exitCode=0 Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.946745 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqsvg" event={"ID":"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b","Type":"ContainerDied","Data":"14dc0debd4e467cc635d37215c68233957a35c1f4ea5e8648c5f3af34e750078"} Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.949902 4751 generic.go:334] "Generic (PLEG): container finished" podID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerID="da576de8f1c9a0effcc2ee958957d7e00c5cf40151114d08518c5b0c29f0fc29" exitCode=0 Jan 30 21:30:21 crc 
kubenswrapper[4751]: I0130 21:30:21.949969 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twcnd" event={"ID":"ac49c6a1-fa74-49f3-ba94-c5a469df4a93","Type":"ContainerDied","Data":"da576de8f1c9a0effcc2ee958957d7e00c5cf40151114d08518c5b0c29f0fc29"} Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.954553 4751 generic.go:334] "Generic (PLEG): container finished" podID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerID="7b0cb114f2b94c0af64389530dc0e77b4ef4178db18be6009544673f334a8088" exitCode=0 Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.954692 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btf57" event={"ID":"6a24b1f1-0656-41ef-826d-c6c40f96b470","Type":"ContainerDied","Data":"7b0cb114f2b94c0af64389530dc0e77b4ef4178db18be6009544673f334a8088"} Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.957785 4751 generic.go:334] "Generic (PLEG): container finished" podID="448ce159-6181-433b-a28a-d00b9240b5af" containerID="a6b343dd72b5235871a55e1a2c2def12bf611b5a0982df3c0c87934e222e51ce" exitCode=0 Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.957834 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4hxc" event={"ID":"448ce159-6181-433b-a28a-d00b9240b5af","Type":"ContainerDied","Data":"a6b343dd72b5235871a55e1a2c2def12bf611b5a0982df3c0c87934e222e51ce"} Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.996465 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cbca202-59f0-4772-a82c-8c448cbc4c70" path="/var/lib/kubelet/pods/1cbca202-59f0-4772-a82c-8c448cbc4c70/volumes" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.998037 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" path="/var/lib/kubelet/pods/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e/volumes" Jan 30 21:30:22 crc kubenswrapper[4751]: I0130 21:30:22.001472 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" path="/var/lib/kubelet/pods/dedbc66c-13e3-4312-85e6-00d215e5f2ff/volumes" Jan 30 21:30:22 crc kubenswrapper[4751]: I0130 21:30:22.079690 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:22 crc kubenswrapper[4751]: I0130 21:30:22.569665 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s9tfl"] Jan 30 21:30:22 crc kubenswrapper[4751]: W0130 21:30:22.579969 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7804f857_fb14_4305_97cc_c966621a55b2.slice/crio-19ad62459394104724127966dc1efbd3069eca9ab57c628b2cec1833d0051aaf WatchSource:0}: Error finding container 19ad62459394104724127966dc1efbd3069eca9ab57c628b2cec1833d0051aaf: Status 404 returned error can't find the container with id 19ad62459394104724127966dc1efbd3069eca9ab57c628b2cec1833d0051aaf Jan 30 21:30:22 crc kubenswrapper[4751]: I0130 21:30:22.987103 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" event={"ID":"7804f857-fb14-4305-97cc-c966621a55b2","Type":"ContainerStarted","Data":"fd26b0bda021a3fb2c5013c401bf33aa0285fad6011eea6cb8dab5b9f4ad458c"} Jan 30 21:30:22 crc kubenswrapper[4751]: I0130 21:30:22.987145 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" event={"ID":"7804f857-fb14-4305-97cc-c966621a55b2","Type":"ContainerStarted","Data":"19ad62459394104724127966dc1efbd3069eca9ab57c628b2cec1833d0051aaf"} Jan 30 21:30:22 crc kubenswrapper[4751]: I0130 21:30:22.992724 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82fwr" event={"ID":"3f294d30-8a47-4f81-930d-3c0bbf564a2e","Type":"ContainerStarted","Data":"e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88"} Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.220980 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.306620 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btf57" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.337426 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-twcnd" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.343571 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnnj4\" (UniqueName: \"kubernetes.io/projected/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-kube-api-access-gnnj4\") pod \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") " Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.343647 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-trusted-ca\") pod \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") " Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.343668 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8stj\" (UniqueName: \"kubernetes.io/projected/6a24b1f1-0656-41ef-826d-c6c40f96b470-kube-api-access-z8stj\") pod \"6a24b1f1-0656-41ef-826d-c6c40f96b470\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") " Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.343735 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-operator-metrics\") pod \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") " Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.343763 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-utilities\") pod \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") " Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.343805 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqk8q\" (UniqueName: \"kubernetes.io/projected/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-kube-api-access-qqk8q\") pod \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") " Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.343858 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-catalog-content\") pod \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") " Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.343907 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-utilities\") pod \"6a24b1f1-0656-41ef-826d-c6c40f96b470\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") " Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.343926 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-catalog-content\") pod \"6a24b1f1-0656-41ef-826d-c6c40f96b470\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") " Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.345132 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-utilities" 
(OuterVolumeSpecName: "utilities") pod "6a24b1f1-0656-41ef-826d-c6c40f96b470" (UID: "6a24b1f1-0656-41ef-826d-c6c40f96b470"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.345790 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "cdcb33b0-97a6-4ded-96b6-1c5bd9053977" (UID: "cdcb33b0-97a6-4ded-96b6-1c5bd9053977"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.346145 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-utilities" (OuterVolumeSpecName: "utilities") pod "ac49c6a1-fa74-49f3-ba94-c5a469df4a93" (UID: "ac49c6a1-fa74-49f3-ba94-c5a469df4a93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.350365 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-kube-api-access-qqk8q" (OuterVolumeSpecName: "kube-api-access-qqk8q") pod "ac49c6a1-fa74-49f3-ba94-c5a469df4a93" (UID: "ac49c6a1-fa74-49f3-ba94-c5a469df4a93"). InnerVolumeSpecName "kube-api-access-qqk8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.351587 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-kube-api-access-gnnj4" (OuterVolumeSpecName: "kube-api-access-gnnj4") pod "cdcb33b0-97a6-4ded-96b6-1c5bd9053977" (UID: "cdcb33b0-97a6-4ded-96b6-1c5bd9053977"). InnerVolumeSpecName "kube-api-access-gnnj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.354737 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "cdcb33b0-97a6-4ded-96b6-1c5bd9053977" (UID: "cdcb33b0-97a6-4ded-96b6-1c5bd9053977"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.359536 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.369119 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p4hxc" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.377374 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a24b1f1-0656-41ef-826d-c6c40f96b470-kube-api-access-z8stj" (OuterVolumeSpecName: "kube-api-access-z8stj") pod "6a24b1f1-0656-41ef-826d-c6c40f96b470" (UID: "6a24b1f1-0656-41ef-826d-c6c40f96b470"). InnerVolumeSpecName "kube-api-access-z8stj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.394194 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a24b1f1-0656-41ef-826d-c6c40f96b470" (UID: "6a24b1f1-0656-41ef-826d-c6c40f96b470"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.435868 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac49c6a1-fa74-49f3-ba94-c5a469df4a93" (UID: "ac49c6a1-fa74-49f3-ba94-c5a469df4a93"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.473156 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7gzn\" (UniqueName: \"kubernetes.io/projected/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-kube-api-access-d7gzn\") pod \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") " Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.473734 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d6jg\" (UniqueName: \"kubernetes.io/projected/448ce159-6181-433b-a28a-d00b9240b5af-kube-api-access-8d6jg\") pod \"448ce159-6181-433b-a28a-d00b9240b5af\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") " Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474107 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-utilities\") pod \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") " Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474152 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-catalog-content\") pod \"448ce159-6181-433b-a28a-d00b9240b5af\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") " Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474210 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-utilities\") pod \"448ce159-6181-433b-a28a-d00b9240b5af\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") " Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474236 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-catalog-content\") pod \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") " Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474608 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474626 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474637 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnnj4\" (UniqueName: \"kubernetes.io/projected/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-kube-api-access-gnnj4\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474649 4751 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474658 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8stj\" (UniqueName: \"kubernetes.io/projected/6a24b1f1-0656-41ef-826d-c6c40f96b470-kube-api-access-z8stj\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474667 4751 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474675 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474684 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqk8q\" (UniqueName: \"kubernetes.io/projected/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-kube-api-access-qqk8q\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474692 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.476028 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-utilities" (OuterVolumeSpecName: "utilities") pod "448ce159-6181-433b-a28a-d00b9240b5af" (UID: "448ce159-6181-433b-a28a-d00b9240b5af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.476616 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/448ce159-6181-433b-a28a-d00b9240b5af-kube-api-access-8d6jg" (OuterVolumeSpecName: "kube-api-access-8d6jg") pod "448ce159-6181-433b-a28a-d00b9240b5af" (UID: "448ce159-6181-433b-a28a-d00b9240b5af"). InnerVolumeSpecName "kube-api-access-8d6jg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.476766 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-utilities" (OuterVolumeSpecName: "utilities") pod "658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" (UID: "658cd3aa-28fd-4fdd-bbce-ab07effcdc0b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.478124 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-kube-api-access-d7gzn" (OuterVolumeSpecName: "kube-api-access-d7gzn") pod "658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" (UID: "658cd3aa-28fd-4fdd-bbce-ab07effcdc0b"). InnerVolumeSpecName "kube-api-access-d7gzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.529119 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" (UID: "658cd3aa-28fd-4fdd-bbce-ab07effcdc0b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.575783 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.575814 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.575826 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.575837 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7gzn\" (UniqueName: \"kubernetes.io/projected/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-kube-api-access-d7gzn\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.575846 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d6jg\" (UniqueName: \"kubernetes.io/projected/448ce159-6181-433b-a28a-d00b9240b5af-kube-api-access-8d6jg\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.592567 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "448ce159-6181-433b-a28a-d00b9240b5af" (UID: "448ce159-6181-433b-a28a-d00b9240b5af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.627265 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bd2xs" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.676824 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-catalog-content\") pod \"a791b2a3-aead-4130-bdfa-e219f2d47593\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.676899 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kcw4\" (UniqueName: \"kubernetes.io/projected/a791b2a3-aead-4130-bdfa-e219f2d47593-kube-api-access-7kcw4\") pod \"a791b2a3-aead-4130-bdfa-e219f2d47593\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.677039 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-utilities\") pod \"a791b2a3-aead-4130-bdfa-e219f2d47593\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.677385 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.677971 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-utilities" (OuterVolumeSpecName: "utilities") pod "a791b2a3-aead-4130-bdfa-e219f2d47593" (UID: "a791b2a3-aead-4130-bdfa-e219f2d47593"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.680125 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a791b2a3-aead-4130-bdfa-e219f2d47593-kube-api-access-7kcw4" (OuterVolumeSpecName: "kube-api-access-7kcw4") pod "a791b2a3-aead-4130-bdfa-e219f2d47593" (UID: "a791b2a3-aead-4130-bdfa-e219f2d47593"). InnerVolumeSpecName "kube-api-access-7kcw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.725216 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a791b2a3-aead-4130-bdfa-e219f2d47593" (UID: "a791b2a3-aead-4130-bdfa-e219f2d47593"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.779295 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.779339 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kcw4\" (UniqueName: \"kubernetes.io/projected/a791b2a3-aead-4130-bdfa-e219f2d47593-kube-api-access-7kcw4\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.779351 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.007227 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.007288 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqsvg" event={"ID":"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b","Type":"ContainerDied","Data":"fe5fcfb1fc44ccdf6c4a741d4d16f63708aa615ae6f06f1cd0921da957c348dc"} Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.007343 4751 scope.go:117] "RemoveContainer" containerID="14dc0debd4e467cc635d37215c68233957a35c1f4ea5e8648c5f3af34e750078" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.012544 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twcnd" event={"ID":"ac49c6a1-fa74-49f3-ba94-c5a469df4a93","Type":"ContainerDied","Data":"1789e26083b6d1b5bcaf1c28e823207fbb2d904374cfefdfc648a991a801687a"} Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.012872 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-twcnd" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.020829 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btf57" event={"ID":"6a24b1f1-0656-41ef-826d-c6c40f96b470","Type":"ContainerDied","Data":"e2bcb606a7c5b09ddbf81239b5c38048d9cdd9c5406a2b4fba58566803a5b46b"} Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.020832 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btf57" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.048435 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4hxc" event={"ID":"448ce159-6181-433b-a28a-d00b9240b5af","Type":"ContainerDied","Data":"73ad198924f3e1eade0a31a7aa5614d242dcbb52f38d6f5161410d402c09b507"} Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.048503 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p4hxc" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.055274 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bd2xs" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.055372 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd2xs" event={"ID":"a791b2a3-aead-4130-bdfa-e219f2d47593","Type":"ContainerDied","Data":"434124a9abf250cc8847456cd8dfc444504ed2c5f79f019406475bd5e02dd626"} Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.064288 4751 scope.go:117] "RemoveContainer" containerID="aed57bf3af8a9be4354110c686278989c4d522de65d68ab7801998493854f3c7" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.064447 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cqsvg"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.065978 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-76rml" event={"ID":"cdcb33b0-97a6-4ded-96b6-1c5bd9053977","Type":"ContainerDied","Data":"644b46c2a2fab923799c15a7a1cf7953e3083f14e5f91214c52144072cb6a7fb"} Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.066061 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.066140 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-82fwr" podUID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerName="registry-server" containerID="cri-o://e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88" gracePeriod=30 Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.066481 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.073409 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.075593 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cqsvg"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.081812 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-twcnd"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.092263 4751 scope.go:117] "RemoveContainer" containerID="ef163a3afba5ac1eb335431aa2395ea6c9a037884a732e5ca52e5147972cf403" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.093920 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-twcnd"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.101509 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-btf57"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.108141 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-btf57"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.117536 4751 scope.go:117] "RemoveContainer" containerID="da576de8f1c9a0effcc2ee958957d7e00c5cf40151114d08518c5b0c29f0fc29" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.118304 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bd2xs"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.123678 4751 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-marketplace/community-operators-bd2xs"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.125219 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" podStartSLOduration=3.12520214 podStartE2EDuration="3.12520214s" podCreationTimestamp="2026-01-30 21:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:30:24.108650447 +0000 UTC m=+962.854473096" watchObservedRunningTime="2026-01-30 21:30:24.12520214 +0000 UTC m=+962.871024789" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.133369 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p4hxc"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.135188 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p4hxc"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.139117 4751 scope.go:117] "RemoveContainer" containerID="2c82591f69d50ae83fda7597991bd617784911392dd33cf4f25ec660904d8e1e" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.154918 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-82fwr" podStartSLOduration=4.670945385 podStartE2EDuration="10.154892876s" podCreationTimestamp="2026-01-30 21:30:14 +0000 UTC" firstStartedPulling="2026-01-30 21:30:15.870128727 +0000 UTC m=+954.615951376" lastFinishedPulling="2026-01-30 21:30:21.354076188 +0000 UTC m=+960.099898867" observedRunningTime="2026-01-30 21:30:24.150411616 +0000 UTC m=+962.896234275" watchObservedRunningTime="2026-01-30 21:30:24.154892876 +0000 UTC m=+962.900715535" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.169428 4751 scope.go:117] "RemoveContainer" containerID="bb7468b7d7c0079e6174ab6fab8062e8d6fe8734e0fcc33a217d950b9c4934f4" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.177687 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-76rml"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.183840 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-76rml"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.246058 4751 scope.go:117] "RemoveContainer" containerID="7b0cb114f2b94c0af64389530dc0e77b4ef4178db18be6009544673f334a8088" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.259462 4751 scope.go:117] "RemoveContainer" containerID="b5fe421c84c49a0fce9c766932eb37dc6ebd8f10a339e43911d566e5bf55820f" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.273784 4751 scope.go:117] "RemoveContainer" containerID="bbac4a5fe3fc00609faebe7f98affa8ef8408a492e79ad4eb2e51f42853acfd7" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.298806 4751 scope.go:117] "RemoveContainer" containerID="a6b343dd72b5235871a55e1a2c2def12bf611b5a0982df3c0c87934e222e51ce" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.317557 4751 scope.go:117] "RemoveContainer" containerID="8c6af28f5b6624524db9760ac1812d7255cfb7aa12f4c630cb631a41508a66c5" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.340521 4751 scope.go:117] "RemoveContainer" containerID="10d985df0a9120f84aedb7a8499aa2e73fa1eb168ac9332a258bbeadbd76d96e" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.357953 4751 scope.go:117] "RemoveContainer" 
containerID="960f258866c4433eda726c04fdab80b057c22c0920935676513c71ebdb592216" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.376707 4751 scope.go:117] "RemoveContainer" containerID="046ce5e1f77fe5269aa0733495a774c7014a135ba89622c5ae3b5e42a5e2bcc2" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.396150 4751 scope.go:117] "RemoveContainer" containerID="ab432e40787fc0f8c27455630b3e162f083b0e2d799d4a3e7e2a6dfb88ac3b16" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.416915 4751 scope.go:117] "RemoveContainer" containerID="6555bc08329fa2ff543d4810ec47f9a72956f19cbf66209a9749cc91438e7744" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.487352 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82fwr_3f294d30-8a47-4f81-930d-3c0bbf564a2e/registry-server/0.log" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.488240 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.594164 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksmst\" (UniqueName: \"kubernetes.io/projected/3f294d30-8a47-4f81-930d-3c0bbf564a2e-kube-api-access-ksmst\") pod \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.594233 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-catalog-content\") pod \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.594272 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-utilities\") pod \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.595149 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-utilities" (OuterVolumeSpecName: "utilities") pod "3f294d30-8a47-4f81-930d-3c0bbf564a2e" (UID: "3f294d30-8a47-4f81-930d-3c0bbf564a2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.600772 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f294d30-8a47-4f81-930d-3c0bbf564a2e-kube-api-access-ksmst" (OuterVolumeSpecName: "kube-api-access-ksmst") pod "3f294d30-8a47-4f81-930d-3c0bbf564a2e" (UID: "3f294d30-8a47-4f81-930d-3c0bbf564a2e"). InnerVolumeSpecName "kube-api-access-ksmst". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.657558 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f294d30-8a47-4f81-930d-3c0bbf564a2e" (UID: "3f294d30-8a47-4f81-930d-3c0bbf564a2e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.697011 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksmst\" (UniqueName: \"kubernetes.io/projected/3f294d30-8a47-4f81-930d-3c0bbf564a2e-kube-api-access-ksmst\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.697418 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.697433 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.088248 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82fwr_3f294d30-8a47-4f81-930d-3c0bbf564a2e/registry-server/0.log" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.089971 4751 generic.go:334] "Generic (PLEG): container finished" podID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerID="e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88" exitCode=1 Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.090079 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.090110 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82fwr" event={"ID":"3f294d30-8a47-4f81-930d-3c0bbf564a2e","Type":"ContainerDied","Data":"e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88"} Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.090168 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82fwr" event={"ID":"3f294d30-8a47-4f81-930d-3c0bbf564a2e","Type":"ContainerDied","Data":"113c2bb9a9bf499f765a898393b70385f7f56309b5e53986a1cbd21e2ebfc4a9"} Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.090200 4751 scope.go:117] "RemoveContainer" containerID="e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.121756 4751 scope.go:117] "RemoveContainer" containerID="b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.155106 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-82fwr"] Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.166444 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-82fwr"] Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.167445 4751 scope.go:117] "RemoveContainer" containerID="0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.199925 4751 scope.go:117] "RemoveContainer" containerID="e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.200377 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88\": container with ID starting with 
e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88 not found: ID does not exist" containerID="e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.200432 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88"} err="failed to get container status \"e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88\": rpc error: code = NotFound desc = could not find container \"e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88\": container with ID starting with e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88 not found: ID does not exist" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.200467 4751 scope.go:117] "RemoveContainer" containerID="b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.200756 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91\": container with ID starting with b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91 not found: ID does not exist" containerID="b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.200793 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91"} err="failed to get container status \"b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91\": rpc error: code = NotFound desc = could not find container \"b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91\": container with ID starting with b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91 not found: ID does not exist" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.200819 4751 scope.go:117] "RemoveContainer" containerID="0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.201081 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701\": container with ID starting with 0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701 not found: ID does not exist" containerID="0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.201119 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701"} err="failed to get container status \"0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701\": rpc error: code = NotFound desc = could not find container \"0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701\": container with ID starting with 0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701 not found: ID does not exist" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.594206 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n5g7x"] Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.594834 4751 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.594871 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.594894 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.594909 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.594927 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.594940 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.594961 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="448ce159-6181-433b-a28a-d00b9240b5af" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.594974 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="448ce159-6181-433b-a28a-d00b9240b5af" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.594991 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595002 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595020 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595033 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595051 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595063 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595082 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595095 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595117 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595133 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595157 4751 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595172 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595193 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595207 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595226 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595238 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595261 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595276 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595295 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcb33b0-97a6-4ded-96b6-1c5bd9053977" containerName="marketplace-operator" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595308 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcb33b0-97a6-4ded-96b6-1c5bd9053977" containerName="marketplace-operator" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595323 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="448ce159-6181-433b-a28a-d00b9240b5af" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595365 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="448ce159-6181-433b-a28a-d00b9240b5af" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595379 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="448ce159-6181-433b-a28a-d00b9240b5af" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595393 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="448ce159-6181-433b-a28a-d00b9240b5af" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595414 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595426 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595441 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595453 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: 
E0130 21:30:25.595467 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595480 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595696 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595714 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595748 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595761 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="448ce159-6181-433b-a28a-d00b9240b5af" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595783 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595802 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcb33b0-97a6-4ded-96b6-1c5bd9053977" containerName="marketplace-operator" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595820 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.597850 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5g7x" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.600417 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.602989 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5g7x"] Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.715147 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b187a442-317c-42c9-ba1a-ff41e0b9bc90-utilities\") pod \"redhat-marketplace-n5g7x\" (UID: \"b187a442-317c-42c9-ba1a-ff41e0b9bc90\") " pod="openshift-marketplace/redhat-marketplace-n5g7x" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.715472 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b187a442-317c-42c9-ba1a-ff41e0b9bc90-catalog-content\") pod \"redhat-marketplace-n5g7x\" (UID: \"b187a442-317c-42c9-ba1a-ff41e0b9bc90\") " pod="openshift-marketplace/redhat-marketplace-n5g7x" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.715653 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7kc9\" (UniqueName: \"kubernetes.io/projected/b187a442-317c-42c9-ba1a-ff41e0b9bc90-kube-api-access-w7kc9\") pod \"redhat-marketplace-n5g7x\" (UID: \"b187a442-317c-42c9-ba1a-ff41e0b9bc90\") " pod="openshift-marketplace/redhat-marketplace-n5g7x" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.787914 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zps7r"] Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.789907 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zps7r" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.801723 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zps7r"] Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.802700 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.817271 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fac62ab3-6625-4680-a70b-235f054baa64-catalog-content\") pod \"redhat-operators-zps7r\" (UID: \"fac62ab3-6625-4680-a70b-235f054baa64\") " pod="openshift-marketplace/redhat-operators-zps7r" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.817384 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7kc9\" (UniqueName: \"kubernetes.io/projected/b187a442-317c-42c9-ba1a-ff41e0b9bc90-kube-api-access-w7kc9\") pod \"redhat-marketplace-n5g7x\" (UID: \"b187a442-317c-42c9-ba1a-ff41e0b9bc90\") " pod="openshift-marketplace/redhat-marketplace-n5g7x" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.817431 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbs7k\" (UniqueName: \"kubernetes.io/projected/fac62ab3-6625-4680-a70b-235f054baa64-kube-api-access-pbs7k\") pod \"redhat-operators-zps7r\" (UID: \"fac62ab3-6625-4680-a70b-235f054baa64\") " pod="openshift-marketplace/redhat-operators-zps7r" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.817538 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b187a442-317c-42c9-ba1a-ff41e0b9bc90-utilities\") pod \"redhat-marketplace-n5g7x\" (UID: \"b187a442-317c-42c9-ba1a-ff41e0b9bc90\") " pod="openshift-marketplace/redhat-marketplace-n5g7x" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.817645 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fac62ab3-6625-4680-a70b-235f054baa64-utilities\") pod \"redhat-operators-zps7r\" (UID: \"fac62ab3-6625-4680-a70b-235f054baa64\") " pod="openshift-marketplace/redhat-operators-zps7r" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.817684 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b187a442-317c-42c9-ba1a-ff41e0b9bc90-catalog-content\") pod \"redhat-marketplace-n5g7x\" (UID: \"b187a442-317c-42c9-ba1a-ff41e0b9bc90\") " pod="openshift-marketplace/redhat-marketplace-n5g7x" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.818193 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b187a442-317c-42c9-ba1a-ff41e0b9bc90-utilities\") pod \"redhat-marketplace-n5g7x\" (UID: \"b187a442-317c-42c9-ba1a-ff41e0b9bc90\") " pod="openshift-marketplace/redhat-marketplace-n5g7x" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.818271 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b187a442-317c-42c9-ba1a-ff41e0b9bc90-catalog-content\") pod \"redhat-marketplace-n5g7x\" (UID: 
\"b187a442-317c-42c9-ba1a-ff41e0b9bc90\") " pod="openshift-marketplace/redhat-marketplace-n5g7x" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.844777 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7kc9\" (UniqueName: \"kubernetes.io/projected/b187a442-317c-42c9-ba1a-ff41e0b9bc90-kube-api-access-w7kc9\") pod \"redhat-marketplace-n5g7x\" (UID: \"b187a442-317c-42c9-ba1a-ff41e0b9bc90\") " pod="openshift-marketplace/redhat-marketplace-n5g7x" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.919192 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fac62ab3-6625-4680-a70b-235f054baa64-utilities\") pod \"redhat-operators-zps7r\" (UID: \"fac62ab3-6625-4680-a70b-235f054baa64\") " pod="openshift-marketplace/redhat-operators-zps7r" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.919277 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fac62ab3-6625-4680-a70b-235f054baa64-catalog-content\") pod \"redhat-operators-zps7r\" (UID: \"fac62ab3-6625-4680-a70b-235f054baa64\") " pod="openshift-marketplace/redhat-operators-zps7r" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.919309 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbs7k\" (UniqueName: \"kubernetes.io/projected/fac62ab3-6625-4680-a70b-235f054baa64-kube-api-access-pbs7k\") pod \"redhat-operators-zps7r\" (UID: \"fac62ab3-6625-4680-a70b-235f054baa64\") " pod="openshift-marketplace/redhat-operators-zps7r" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.919851 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fac62ab3-6625-4680-a70b-235f054baa64-catalog-content\") pod \"redhat-operators-zps7r\" (UID: \"fac62ab3-6625-4680-a70b-235f054baa64\") " pod="openshift-marketplace/redhat-operators-zps7r" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.920082 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fac62ab3-6625-4680-a70b-235f054baa64-utilities\") pod \"redhat-operators-zps7r\" (UID: \"fac62ab3-6625-4680-a70b-235f054baa64\") " pod="openshift-marketplace/redhat-operators-zps7r" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.927082 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5g7x" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.940425 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbs7k\" (UniqueName: \"kubernetes.io/projected/fac62ab3-6625-4680-a70b-235f054baa64-kube-api-access-pbs7k\") pod \"redhat-operators-zps7r\" (UID: \"fac62ab3-6625-4680-a70b-235f054baa64\") " pod="openshift-marketplace/redhat-operators-zps7r" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.985888 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" path="/var/lib/kubelet/pods/3f294d30-8a47-4f81-930d-3c0bbf564a2e/volumes" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.994144 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="448ce159-6181-433b-a28a-d00b9240b5af" path="/var/lib/kubelet/pods/448ce159-6181-433b-a28a-d00b9240b5af/volumes" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.994850 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" path="/var/lib/kubelet/pods/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b/volumes" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.996253 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a24b1f1-0656-41ef-826d-c6c40f96b470" path="/var/lib/kubelet/pods/6a24b1f1-0656-41ef-826d-c6c40f96b470/volumes" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.996853 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a791b2a3-aead-4130-bdfa-e219f2d47593" path="/var/lib/kubelet/pods/a791b2a3-aead-4130-bdfa-e219f2d47593/volumes" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.997564 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" path="/var/lib/kubelet/pods/ac49c6a1-fa74-49f3-ba94-c5a469df4a93/volumes" Jan 30 21:30:26 crc kubenswrapper[4751]: I0130 21:30:26.002039 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdcb33b0-97a6-4ded-96b6-1c5bd9053977" path="/var/lib/kubelet/pods/cdcb33b0-97a6-4ded-96b6-1c5bd9053977/volumes" Jan 30 21:30:26 crc kubenswrapper[4751]: I0130 21:30:26.111702 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zps7r" Jan 30 21:30:26 crc kubenswrapper[4751]: I0130 21:30:26.348482 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5g7x"] Jan 30 21:30:26 crc kubenswrapper[4751]: W0130 21:30:26.352264 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb187a442_317c_42c9_ba1a_ff41e0b9bc90.slice/crio-c92ffc172810c61f06b70d401c1d320d022431dbc7183c734478e3087ca6419d WatchSource:0}: Error finding container c92ffc172810c61f06b70d401c1d320d022431dbc7183c734478e3087ca6419d: Status 404 returned error can't find the container with id c92ffc172810c61f06b70d401c1d320d022431dbc7183c734478e3087ca6419d Jan 30 21:30:26 crc kubenswrapper[4751]: I0130 21:30:26.540480 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zps7r"] Jan 30 21:30:26 crc kubenswrapper[4751]: W0130 21:30:26.605646 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfac62ab3_6625_4680_a70b_235f054baa64.slice/crio-692e0d371b83b0506b343dd1acdb9421c1737e030f4103c895488a224c673bd2 WatchSource:0}: Error finding container 692e0d371b83b0506b343dd1acdb9421c1737e030f4103c895488a224c673bd2: Status 404 returned error can't find the container with id 692e0d371b83b0506b343dd1acdb9421c1737e030f4103c895488a224c673bd2 Jan 30 21:30:27 crc kubenswrapper[4751]: I0130 21:30:27.116369 4751 generic.go:334] "Generic (PLEG): container finished" podID="fac62ab3-6625-4680-a70b-235f054baa64" containerID="54fe6635e439fdc023584a1a5e0a30703481be8132e20d29f58e8655731f3350" exitCode=0 Jan 30 21:30:27 crc kubenswrapper[4751]: I0130 21:30:27.116444 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zps7r" event={"ID":"fac62ab3-6625-4680-a70b-235f054baa64","Type":"ContainerDied","Data":"54fe6635e439fdc023584a1a5e0a30703481be8132e20d29f58e8655731f3350"} Jan 30 21:30:27 crc kubenswrapper[4751]: I0130 21:30:27.116886 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zps7r" event={"ID":"fac62ab3-6625-4680-a70b-235f054baa64","Type":"ContainerStarted","Data":"692e0d371b83b0506b343dd1acdb9421c1737e030f4103c895488a224c673bd2"} Jan 30 21:30:27 crc kubenswrapper[4751]: I0130 21:30:27.119727 4751 generic.go:334] "Generic (PLEG): container finished" podID="b187a442-317c-42c9-ba1a-ff41e0b9bc90" containerID="6fc910528e000147fc6e9e3ad8eb2e6b5c8b82173f5d9a6e1f8d75667e00a7a0" exitCode=0 Jan 30 21:30:27 crc kubenswrapper[4751]: I0130 21:30:27.119759 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5g7x" event={"ID":"b187a442-317c-42c9-ba1a-ff41e0b9bc90","Type":"ContainerDied","Data":"6fc910528e000147fc6e9e3ad8eb2e6b5c8b82173f5d9a6e1f8d75667e00a7a0"} Jan 30 21:30:27 crc kubenswrapper[4751]: I0130 21:30:27.119780 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5g7x" event={"ID":"b187a442-317c-42c9-ba1a-ff41e0b9bc90","Type":"ContainerStarted","Data":"c92ffc172810c61f06b70d401c1d320d022431dbc7183c734478e3087ca6419d"} Jan 30 21:30:27 crc kubenswrapper[4751]: I0130 21:30:27.994601 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kcjb7"] Jan 30 21:30:27 crc kubenswrapper[4751]: I0130 21:30:27.996097 4751 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kcjb7"] Jan 30 21:30:27 crc kubenswrapper[4751]: I0130 21:30:27.996205 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:27 crc kubenswrapper[4751]: I0130 21:30:27.999108 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.061160 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-catalog-content\") pod \"certified-operators-kcjb7\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.061256 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-utilities\") pod \"certified-operators-kcjb7\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.061289 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbtz7\" (UniqueName: \"kubernetes.io/projected/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-kube-api-access-qbtz7\") pod \"certified-operators-kcjb7\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.130594 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zps7r" event={"ID":"fac62ab3-6625-4680-a70b-235f054baa64","Type":"ContainerStarted","Data":"faab27cb6633bc44cac4e500c3c4abca1f48a467525fb1ddbfb21c5a35e15305"} Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.133708 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5g7x" event={"ID":"b187a442-317c-42c9-ba1a-ff41e0b9bc90","Type":"ContainerStarted","Data":"fc85a74dbb30af00d8b2934bf49b3ea553d7c16a81fa37ea788918daa04a73fa"} Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.163492 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-catalog-content\") pod \"certified-operators-kcjb7\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.163563 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-utilities\") pod \"certified-operators-kcjb7\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.163586 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbtz7\" (UniqueName: \"kubernetes.io/projected/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-kube-api-access-qbtz7\") pod \"certified-operators-kcjb7\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 
21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.164370 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-catalog-content\") pod \"certified-operators-kcjb7\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.164644 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-utilities\") pod \"certified-operators-kcjb7\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.187193 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qvqpq"] Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.192044 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbtz7\" (UniqueName: \"kubernetes.io/projected/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-kube-api-access-qbtz7\") pod \"certified-operators-kcjb7\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.200412 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qvqpq"] Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.200511 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.204726 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.264790 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f675e6ee-15d0-4fa7-94ec-c08976e45a20-catalog-content\") pod \"community-operators-qvqpq\" (UID: \"f675e6ee-15d0-4fa7-94ec-c08976e45a20\") " pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.264849 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbd6r\" (UniqueName: \"kubernetes.io/projected/f675e6ee-15d0-4fa7-94ec-c08976e45a20-kube-api-access-zbd6r\") pod \"community-operators-qvqpq\" (UID: \"f675e6ee-15d0-4fa7-94ec-c08976e45a20\") " pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.264915 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f675e6ee-15d0-4fa7-94ec-c08976e45a20-utilities\") pod \"community-operators-qvqpq\" (UID: \"f675e6ee-15d0-4fa7-94ec-c08976e45a20\") " pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.335737 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.367045 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f675e6ee-15d0-4fa7-94ec-c08976e45a20-catalog-content\") pod \"community-operators-qvqpq\" (UID: \"f675e6ee-15d0-4fa7-94ec-c08976e45a20\") " pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.367163 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbd6r\" (UniqueName: \"kubernetes.io/projected/f675e6ee-15d0-4fa7-94ec-c08976e45a20-kube-api-access-zbd6r\") pod \"community-operators-qvqpq\" (UID: \"f675e6ee-15d0-4fa7-94ec-c08976e45a20\") " pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.367351 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f675e6ee-15d0-4fa7-94ec-c08976e45a20-utilities\") pod \"community-operators-qvqpq\" (UID: \"f675e6ee-15d0-4fa7-94ec-c08976e45a20\") " pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.368131 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f675e6ee-15d0-4fa7-94ec-c08976e45a20-catalog-content\") pod \"community-operators-qvqpq\" (UID: \"f675e6ee-15d0-4fa7-94ec-c08976e45a20\") " pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.368611 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f675e6ee-15d0-4fa7-94ec-c08976e45a20-utilities\") pod \"community-operators-qvqpq\" (UID: \"f675e6ee-15d0-4fa7-94ec-c08976e45a20\") " pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.389426 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbd6r\" (UniqueName: \"kubernetes.io/projected/f675e6ee-15d0-4fa7-94ec-c08976e45a20-kube-api-access-zbd6r\") pod \"community-operators-qvqpq\" (UID: \"f675e6ee-15d0-4fa7-94ec-c08976e45a20\") " pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.532629 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.817042 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kcjb7"] Jan 30 21:30:28 crc kubenswrapper[4751]: W0130 21:30:28.825632 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0c60341_0c3e_4be5_a2a2_e5a4ed9b5776.slice/crio-95551a4e245bf6ad27a1b26cb62b9724a5be7406b4d6229b016888f12ca7d6d4 WatchSource:0}: Error finding container 95551a4e245bf6ad27a1b26cb62b9724a5be7406b4d6229b016888f12ca7d6d4: Status 404 returned error can't find the container with id 95551a4e245bf6ad27a1b26cb62b9724a5be7406b4d6229b016888f12ca7d6d4 Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.974076 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qvqpq"] Jan 30 21:30:28 crc kubenswrapper[4751]: W0130 21:30:28.979534 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf675e6ee_15d0_4fa7_94ec_c08976e45a20.slice/crio-8e90208d71032a7d08484761f34efbdfd42b0aa3c41a12f2d7f0b42eda0edff4 WatchSource:0}: Error finding container 8e90208d71032a7d08484761f34efbdfd42b0aa3c41a12f2d7f0b42eda0edff4: Status 404 returned error can't find the container with id 8e90208d71032a7d08484761f34efbdfd42b0aa3c41a12f2d7f0b42eda0edff4 Jan 30 21:30:29 crc kubenswrapper[4751]: I0130 21:30:29.145264 4751 generic.go:334] "Generic (PLEG): container finished" podID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" containerID="f0ff7f17884024cadb59819e4114f64f13e4c4199dcbe665c88b3d9400eb196b" exitCode=0 Jan 30 21:30:29 crc kubenswrapper[4751]: I0130 21:30:29.145365 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcjb7" event={"ID":"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776","Type":"ContainerDied","Data":"f0ff7f17884024cadb59819e4114f64f13e4c4199dcbe665c88b3d9400eb196b"} Jan 30 21:30:29 crc kubenswrapper[4751]: I0130 21:30:29.146437 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcjb7" event={"ID":"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776","Type":"ContainerStarted","Data":"95551a4e245bf6ad27a1b26cb62b9724a5be7406b4d6229b016888f12ca7d6d4"} Jan 30 21:30:29 crc kubenswrapper[4751]: I0130 21:30:29.152238 4751 generic.go:334] "Generic (PLEG): container finished" podID="fac62ab3-6625-4680-a70b-235f054baa64" containerID="faab27cb6633bc44cac4e500c3c4abca1f48a467525fb1ddbfb21c5a35e15305" exitCode=0 Jan 30 21:30:29 crc kubenswrapper[4751]: I0130 21:30:29.153239 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zps7r" event={"ID":"fac62ab3-6625-4680-a70b-235f054baa64","Type":"ContainerDied","Data":"faab27cb6633bc44cac4e500c3c4abca1f48a467525fb1ddbfb21c5a35e15305"} Jan 30 21:30:29 crc kubenswrapper[4751]: I0130 21:30:29.156622 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvqpq" event={"ID":"f675e6ee-15d0-4fa7-94ec-c08976e45a20","Type":"ContainerStarted","Data":"8e90208d71032a7d08484761f34efbdfd42b0aa3c41a12f2d7f0b42eda0edff4"} Jan 30 21:30:29 crc kubenswrapper[4751]: I0130 21:30:29.163164 4751 generic.go:334] "Generic (PLEG): container finished" podID="b187a442-317c-42c9-ba1a-ff41e0b9bc90" containerID="fc85a74dbb30af00d8b2934bf49b3ea553d7c16a81fa37ea788918daa04a73fa" 
exitCode=0 Jan 30 21:30:29 crc kubenswrapper[4751]: I0130 21:30:29.163215 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5g7x" event={"ID":"b187a442-317c-42c9-ba1a-ff41e0b9bc90","Type":"ContainerDied","Data":"fc85a74dbb30af00d8b2934bf49b3ea553d7c16a81fa37ea788918daa04a73fa"} Jan 30 21:30:30 crc kubenswrapper[4751]: I0130 21:30:30.177982 4751 generic.go:334] "Generic (PLEG): container finished" podID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" containerID="b9102fc49cd164d867074d03c63d8593be70d6d663c1f645db5a7cf70fe3ec65" exitCode=0 Jan 30 21:30:30 crc kubenswrapper[4751]: I0130 21:30:30.178082 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcjb7" event={"ID":"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776","Type":"ContainerDied","Data":"b9102fc49cd164d867074d03c63d8593be70d6d663c1f645db5a7cf70fe3ec65"} Jan 30 21:30:30 crc kubenswrapper[4751]: I0130 21:30:30.183187 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zps7r" event={"ID":"fac62ab3-6625-4680-a70b-235f054baa64","Type":"ContainerStarted","Data":"10cc6a49d8f6ffeef2e7fac35d00495ce9eb4ba01ab3857583f55707658823c3"} Jan 30 21:30:30 crc kubenswrapper[4751]: I0130 21:30:30.189009 4751 generic.go:334] "Generic (PLEG): container finished" podID="f675e6ee-15d0-4fa7-94ec-c08976e45a20" containerID="28b79fec40c55c7e7f50dba6771bbbaaa9daf29c3b34bea51e8235c501364afe" exitCode=0 Jan 30 21:30:30 crc kubenswrapper[4751]: I0130 21:30:30.189091 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvqpq" event={"ID":"f675e6ee-15d0-4fa7-94ec-c08976e45a20","Type":"ContainerDied","Data":"28b79fec40c55c7e7f50dba6771bbbaaa9daf29c3b34bea51e8235c501364afe"} Jan 30 21:30:30 crc kubenswrapper[4751]: I0130 21:30:30.192200 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5g7x" event={"ID":"b187a442-317c-42c9-ba1a-ff41e0b9bc90","Type":"ContainerStarted","Data":"f59145b3135d1eddb1361429c0bb16e51dba7bb7af33f696367ab8138509f419"} Jan 30 21:30:30 crc kubenswrapper[4751]: I0130 21:30:30.260392 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zps7r" podStartSLOduration=2.780505939 podStartE2EDuration="5.260320341s" podCreationTimestamp="2026-01-30 21:30:25 +0000 UTC" firstStartedPulling="2026-01-30 21:30:27.120186165 +0000 UTC m=+965.866008814" lastFinishedPulling="2026-01-30 21:30:29.600000557 +0000 UTC m=+968.345823216" observedRunningTime="2026-01-30 21:30:30.253423857 +0000 UTC m=+968.999246516" watchObservedRunningTime="2026-01-30 21:30:30.260320341 +0000 UTC m=+969.006142990" Jan 30 21:30:30 crc kubenswrapper[4751]: I0130 21:30:30.273215 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n5g7x" podStartSLOduration=2.774970921 podStartE2EDuration="5.273199277s" podCreationTimestamp="2026-01-30 21:30:25 +0000 UTC" firstStartedPulling="2026-01-30 21:30:27.121702365 +0000 UTC m=+965.867525014" lastFinishedPulling="2026-01-30 21:30:29.619930701 +0000 UTC m=+968.365753370" observedRunningTime="2026-01-30 21:30:30.269088097 +0000 UTC m=+969.014910756" watchObservedRunningTime="2026-01-30 21:30:30.273199277 +0000 UTC m=+969.019021926" Jan 30 21:30:31 crc kubenswrapper[4751]: I0130 21:30:31.211609 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-qvqpq" event={"ID":"f675e6ee-15d0-4fa7-94ec-c08976e45a20","Type":"ContainerStarted","Data":"5483ad83271f18d69e048ebc2aee6a4fed47d89d32c14da67667286931d2f980"} Jan 30 21:30:31 crc kubenswrapper[4751]: I0130 21:30:31.214442 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcjb7" event={"ID":"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776","Type":"ContainerStarted","Data":"866096c9e0962b4450aeafceff9a6e799efcc8a53a6c4825b431141eedcb2cac"} Jan 30 21:30:31 crc kubenswrapper[4751]: I0130 21:30:31.260230 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kcjb7" podStartSLOduration=2.82838395 podStartE2EDuration="4.260209981s" podCreationTimestamp="2026-01-30 21:30:27 +0000 UTC" firstStartedPulling="2026-01-30 21:30:29.150047992 +0000 UTC m=+967.895870641" lastFinishedPulling="2026-01-30 21:30:30.581874013 +0000 UTC m=+969.327696672" observedRunningTime="2026-01-30 21:30:31.258994869 +0000 UTC m=+970.004817528" watchObservedRunningTime="2026-01-30 21:30:31.260209981 +0000 UTC m=+970.006032640" Jan 30 21:30:32 crc kubenswrapper[4751]: I0130 21:30:32.225877 4751 generic.go:334] "Generic (PLEG): container finished" podID="f675e6ee-15d0-4fa7-94ec-c08976e45a20" containerID="5483ad83271f18d69e048ebc2aee6a4fed47d89d32c14da67667286931d2f980" exitCode=0 Jan 30 21:30:32 crc kubenswrapper[4751]: I0130 21:30:32.226012 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvqpq" event={"ID":"f675e6ee-15d0-4fa7-94ec-c08976e45a20","Type":"ContainerDied","Data":"5483ad83271f18d69e048ebc2aee6a4fed47d89d32c14da67667286931d2f980"} Jan 30 21:30:33 crc kubenswrapper[4751]: I0130 21:30:33.235531 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvqpq" event={"ID":"f675e6ee-15d0-4fa7-94ec-c08976e45a20","Type":"ContainerStarted","Data":"8db2f5da25579fcf378ad90f3244544b9833c335b30adddc132277b2aa70b810"} Jan 30 21:30:33 crc kubenswrapper[4751]: I0130 21:30:33.266678 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qvqpq" podStartSLOduration=2.859830926 podStartE2EDuration="5.266659351s" podCreationTimestamp="2026-01-30 21:30:28 +0000 UTC" firstStartedPulling="2026-01-30 21:30:30.194602049 +0000 UTC m=+968.940424698" lastFinishedPulling="2026-01-30 21:30:32.601430474 +0000 UTC m=+971.347253123" observedRunningTime="2026-01-30 21:30:33.260805274 +0000 UTC m=+972.006627933" watchObservedRunningTime="2026-01-30 21:30:33.266659351 +0000 UTC m=+972.012482010" Jan 30 21:30:35 crc kubenswrapper[4751]: I0130 21:30:35.928048 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n5g7x" Jan 30 21:30:35 crc kubenswrapper[4751]: I0130 21:30:35.928382 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n5g7x" Jan 30 21:30:35 crc kubenswrapper[4751]: I0130 21:30:35.997613 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n5g7x" Jan 30 21:30:36 crc kubenswrapper[4751]: I0130 21:30:36.112196 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zps7r" Jan 30 21:30:36 crc kubenswrapper[4751]: I0130 21:30:36.112268 4751 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zps7r" Jan 30 21:30:36 crc kubenswrapper[4751]: I0130 21:30:36.339543 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n5g7x" Jan 30 21:30:37 crc kubenswrapper[4751]: I0130 21:30:37.192350 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zps7r" podUID="fac62ab3-6625-4680-a70b-235f054baa64" containerName="registry-server" probeResult="failure" output=< Jan 30 21:30:37 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:30:37 crc kubenswrapper[4751]: > Jan 30 21:30:38 crc kubenswrapper[4751]: I0130 21:30:38.336346 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:38 crc kubenswrapper[4751]: I0130 21:30:38.336427 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:38 crc kubenswrapper[4751]: I0130 21:30:38.429843 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:38 crc kubenswrapper[4751]: I0130 21:30:38.533206 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:38 crc kubenswrapper[4751]: I0130 21:30:38.533266 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:38 crc kubenswrapper[4751]: I0130 21:30:38.589268 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:39 crc kubenswrapper[4751]: I0130 21:30:39.338698 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:39 crc kubenswrapper[4751]: I0130 21:30:39.373040 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:46 crc kubenswrapper[4751]: I0130 21:30:46.183283 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zps7r" Jan 30 21:30:46 crc kubenswrapper[4751]: I0130 21:30:46.233810 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zps7r" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.052727 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh"] Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.054601 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.057131 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.074700 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh"] Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.149519 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.149605 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph68p\" (UniqueName: \"kubernetes.io/projected/eac36070-4c04-460f-bfbb-e77659bad07e-kube-api-access-ph68p\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.149676 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.251058 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.251214 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.251261 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph68p\" (UniqueName: \"kubernetes.io/projected/eac36070-4c04-460f-bfbb-e77659bad07e-kube-api-access-ph68p\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.251732 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.251925 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.272888 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph68p\" (UniqueName: \"kubernetes.io/projected/eac36070-4c04-460f-bfbb-e77659bad07e-kube-api-access-ph68p\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.424584 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.938657 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh"] Jan 30 21:30:59 crc kubenswrapper[4751]: I0130 21:30:59.627276 4751 generic.go:334] "Generic (PLEG): container finished" podID="eac36070-4c04-460f-bfbb-e77659bad07e" containerID="fe731b5cf787abbf126d827b1bca7991122721d525a7695c05666bcacf428912" exitCode=0 Jan 30 21:30:59 crc kubenswrapper[4751]: I0130 21:30:59.627342 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" event={"ID":"eac36070-4c04-460f-bfbb-e77659bad07e","Type":"ContainerDied","Data":"fe731b5cf787abbf126d827b1bca7991122721d525a7695c05666bcacf428912"} Jan 30 21:30:59 crc kubenswrapper[4751]: I0130 21:30:59.627373 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" event={"ID":"eac36070-4c04-460f-bfbb-e77659bad07e","Type":"ContainerStarted","Data":"e6eca3e065969b7e901287ac9ec8650a7e9ad46bb1c8233e59e733732aaf56e6"} Jan 30 21:30:59 crc kubenswrapper[4751]: I0130 21:30:59.629265 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 21:31:01 crc kubenswrapper[4751]: I0130 21:31:01.655741 4751 generic.go:334] "Generic (PLEG): container finished" podID="eac36070-4c04-460f-bfbb-e77659bad07e" containerID="16efb573e2aed225b5226c4007411ef3aa051dbd38a6bf1e12978c5bc781d705" exitCode=0 Jan 30 21:31:01 crc kubenswrapper[4751]: I0130 21:31:01.655793 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" event={"ID":"eac36070-4c04-460f-bfbb-e77659bad07e","Type":"ContainerDied","Data":"16efb573e2aed225b5226c4007411ef3aa051dbd38a6bf1e12978c5bc781d705"} Jan 30 21:31:02 crc kubenswrapper[4751]: I0130 21:31:02.672564 4751 generic.go:334] "Generic (PLEG): container finished" 
podID="eac36070-4c04-460f-bfbb-e77659bad07e" containerID="f41e2a972da4eebbe257cb3cf8c41d744879120c462a9471dc47984afeb89ac5" exitCode=0 Jan 30 21:31:02 crc kubenswrapper[4751]: I0130 21:31:02.672929 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" event={"ID":"eac36070-4c04-460f-bfbb-e77659bad07e","Type":"ContainerDied","Data":"f41e2a972da4eebbe257cb3cf8c41d744879120c462a9471dc47984afeb89ac5"} Jan 30 21:31:03 crc kubenswrapper[4751]: I0130 21:31:03.994058 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.149855 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph68p\" (UniqueName: \"kubernetes.io/projected/eac36070-4c04-460f-bfbb-e77659bad07e-kube-api-access-ph68p\") pod \"eac36070-4c04-460f-bfbb-e77659bad07e\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.149916 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-util\") pod \"eac36070-4c04-460f-bfbb-e77659bad07e\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.150094 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-bundle\") pod \"eac36070-4c04-460f-bfbb-e77659bad07e\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.150842 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-bundle" (OuterVolumeSpecName: "bundle") pod "eac36070-4c04-460f-bfbb-e77659bad07e" (UID: "eac36070-4c04-460f-bfbb-e77659bad07e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.157557 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eac36070-4c04-460f-bfbb-e77659bad07e-kube-api-access-ph68p" (OuterVolumeSpecName: "kube-api-access-ph68p") pod "eac36070-4c04-460f-bfbb-e77659bad07e" (UID: "eac36070-4c04-460f-bfbb-e77659bad07e"). InnerVolumeSpecName "kube-api-access-ph68p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.163560 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-util" (OuterVolumeSpecName: "util") pod "eac36070-4c04-460f-bfbb-e77659bad07e" (UID: "eac36070-4c04-460f-bfbb-e77659bad07e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.252232 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph68p\" (UniqueName: \"kubernetes.io/projected/eac36070-4c04-460f-bfbb-e77659bad07e-kube-api-access-ph68p\") on node \"crc\" DevicePath \"\"" Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.252656 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.252786 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.693664 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" event={"ID":"eac36070-4c04-460f-bfbb-e77659bad07e","Type":"ContainerDied","Data":"e6eca3e065969b7e901287ac9ec8650a7e9ad46bb1c8233e59e733732aaf56e6"} Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.693713 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6eca3e065969b7e901287ac9ec8650a7e9ad46bb1c8233e59e733732aaf56e6" Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.693777 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.044260 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-k49vc"] Jan 30 21:31:08 crc kubenswrapper[4751]: E0130 21:31:08.045282 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac36070-4c04-460f-bfbb-e77659bad07e" containerName="extract" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.045304 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac36070-4c04-460f-bfbb-e77659bad07e" containerName="extract" Jan 30 21:31:08 crc kubenswrapper[4751]: E0130 21:31:08.045576 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac36070-4c04-460f-bfbb-e77659bad07e" containerName="pull" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.045595 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac36070-4c04-460f-bfbb-e77659bad07e" containerName="pull" Jan 30 21:31:08 crc kubenswrapper[4751]: E0130 21:31:08.045631 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac36070-4c04-460f-bfbb-e77659bad07e" containerName="util" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.045644 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac36070-4c04-460f-bfbb-e77659bad07e" containerName="util" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.045940 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="eac36070-4c04-460f-bfbb-e77659bad07e" containerName="extract" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.046847 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-k49vc" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.048916 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-pkvv8" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.049407 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.051798 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.059212 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-k49vc"] Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.214265 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hhc6\" (UniqueName: \"kubernetes.io/projected/c9f603b5-de3a-4d5e-acc1-6da32a99dcaa-kube-api-access-5hhc6\") pod \"nmstate-operator-646758c888-k49vc\" (UID: \"c9f603b5-de3a-4d5e-acc1-6da32a99dcaa\") " pod="openshift-nmstate/nmstate-operator-646758c888-k49vc" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.315543 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hhc6\" (UniqueName: \"kubernetes.io/projected/c9f603b5-de3a-4d5e-acc1-6da32a99dcaa-kube-api-access-5hhc6\") pod \"nmstate-operator-646758c888-k49vc\" (UID: \"c9f603b5-de3a-4d5e-acc1-6da32a99dcaa\") " pod="openshift-nmstate/nmstate-operator-646758c888-k49vc" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.346146 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hhc6\" (UniqueName: \"kubernetes.io/projected/c9f603b5-de3a-4d5e-acc1-6da32a99dcaa-kube-api-access-5hhc6\") pod \"nmstate-operator-646758c888-k49vc\" (UID: \"c9f603b5-de3a-4d5e-acc1-6da32a99dcaa\") " pod="openshift-nmstate/nmstate-operator-646758c888-k49vc" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.373405 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-k49vc" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.802922 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-k49vc"] Jan 30 21:31:09 crc kubenswrapper[4751]: I0130 21:31:09.745592 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-k49vc" event={"ID":"c9f603b5-de3a-4d5e-acc1-6da32a99dcaa","Type":"ContainerStarted","Data":"71ecaefdf49db2d2399b3d8f99497f479242692b47863a618ecfc9abc36a48fc"} Jan 30 21:31:11 crc kubenswrapper[4751]: I0130 21:31:11.762517 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-k49vc" event={"ID":"c9f603b5-de3a-4d5e-acc1-6da32a99dcaa","Type":"ContainerStarted","Data":"3966b75bb810a7c8f152fabb814d68de8b3f83c8fd1b635840ddabe1f21acc26"} Jan 30 21:31:11 crc kubenswrapper[4751]: I0130 21:31:11.791834 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-k49vc" podStartSLOduration=1.774525896 podStartE2EDuration="3.791813986s" podCreationTimestamp="2026-01-30 21:31:08 +0000 UTC" firstStartedPulling="2026-01-30 21:31:08.819423337 +0000 UTC m=+1007.565245976" lastFinishedPulling="2026-01-30 21:31:10.836711417 +0000 UTC m=+1009.582534066" observedRunningTime="2026-01-30 21:31:11.784591262 +0000 UTC m=+1010.530413921" watchObservedRunningTime="2026-01-30 21:31:11.791813986 +0000 UTC m=+1010.537636635" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.638045 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rfrtx"] Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.640080 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-rfrtx" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.644511 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-llzl8" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.646664 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv"] Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.647518 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.659807 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.665440 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rfrtx"] Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.683173 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-d95cp"] Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.684129 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.687685 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv"] Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.719281 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv4c5\" (UniqueName: \"kubernetes.io/projected/be191f8d-d8ce-4f29-95f1-1278c108ca11-kube-api-access-cv4c5\") pod \"nmstate-webhook-8474b5b9d8-7hqmv\" (UID: \"be191f8d-d8ce-4f29-95f1-1278c108ca11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.719319 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7fdh\" (UniqueName: \"kubernetes.io/projected/f0ccd951-df7f-452f-b340-64fa7c9f9916-kube-api-access-p7fdh\") pod \"nmstate-metrics-54757c584b-rfrtx\" (UID: \"f0ccd951-df7f-452f-b340-64fa7c9f9916\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rfrtx" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.719375 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/be191f8d-d8ce-4f29-95f1-1278c108ca11-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-7hqmv\" (UID: \"be191f8d-d8ce-4f29-95f1-1278c108ca11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.796361 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz"] Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.797194 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.799896 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.800802 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-wglvk" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.806792 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.820159 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz"] Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.820466 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/eea5deed-9d07-45b2-b400-64b7c2336994-dbus-socket\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.820502 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv4c5\" (UniqueName: \"kubernetes.io/projected/be191f8d-d8ce-4f29-95f1-1278c108ca11-kube-api-access-cv4c5\") pod \"nmstate-webhook-8474b5b9d8-7hqmv\" (UID: \"be191f8d-d8ce-4f29-95f1-1278c108ca11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.820527 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7fdh\" (UniqueName: \"kubernetes.io/projected/f0ccd951-df7f-452f-b340-64fa7c9f9916-kube-api-access-p7fdh\") pod \"nmstate-metrics-54757c584b-rfrtx\" (UID: \"f0ccd951-df7f-452f-b340-64fa7c9f9916\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rfrtx" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.820576 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/be191f8d-d8ce-4f29-95f1-1278c108ca11-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-7hqmv\" (UID: \"be191f8d-d8ce-4f29-95f1-1278c108ca11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.820605 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdghn\" (UniqueName: \"kubernetes.io/projected/eea5deed-9d07-45b2-b400-64b7c2336994-kube-api-access-gdghn\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.820671 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/eea5deed-9d07-45b2-b400-64b7c2336994-ovs-socket\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.820686 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/eea5deed-9d07-45b2-b400-64b7c2336994-nmstate-lock\") pod \"nmstate-handler-d95cp\" (UID: 
\"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: E0130 21:31:18.821537 4751 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 30 21:31:18 crc kubenswrapper[4751]: E0130 21:31:18.821661 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be191f8d-d8ce-4f29-95f1-1278c108ca11-tls-key-pair podName:be191f8d-d8ce-4f29-95f1-1278c108ca11 nodeName:}" failed. No retries permitted until 2026-01-30 21:31:19.321643917 +0000 UTC m=+1018.067466566 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/be191f8d-d8ce-4f29-95f1-1278c108ca11-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-7hqmv" (UID: "be191f8d-d8ce-4f29-95f1-1278c108ca11") : secret "openshift-nmstate-webhook" not found Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.849795 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv4c5\" (UniqueName: \"kubernetes.io/projected/be191f8d-d8ce-4f29-95f1-1278c108ca11-kube-api-access-cv4c5\") pod \"nmstate-webhook-8474b5b9d8-7hqmv\" (UID: \"be191f8d-d8ce-4f29-95f1-1278c108ca11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.855793 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7fdh\" (UniqueName: \"kubernetes.io/projected/f0ccd951-df7f-452f-b340-64fa7c9f9916-kube-api-access-p7fdh\") pod \"nmstate-metrics-54757c584b-rfrtx\" (UID: \"f0ccd951-df7f-452f-b340-64fa7c9f9916\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rfrtx" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.922606 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2806dd41-f23b-466a-a187-4689685f6b86-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-kxkfz\" (UID: \"2806dd41-f23b-466a-a187-4689685f6b86\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.922927 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/eea5deed-9d07-45b2-b400-64b7c2336994-ovs-socket\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.923072 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/eea5deed-9d07-45b2-b400-64b7c2336994-ovs-socket\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.923180 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/eea5deed-9d07-45b2-b400-64b7c2336994-nmstate-lock\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.923048 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/eea5deed-9d07-45b2-b400-64b7c2336994-nmstate-lock\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.923917 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/eea5deed-9d07-45b2-b400-64b7c2336994-dbus-socket\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.924118 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdghn\" (UniqueName: \"kubernetes.io/projected/eea5deed-9d07-45b2-b400-64b7c2336994-kube-api-access-gdghn\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.924224 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zdcz\" (UniqueName: \"kubernetes.io/projected/2806dd41-f23b-466a-a187-4689685f6b86-kube-api-access-9zdcz\") pod \"nmstate-console-plugin-7754f76f8b-kxkfz\" (UID: \"2806dd41-f23b-466a-a187-4689685f6b86\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.924389 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2806dd41-f23b-466a-a187-4689685f6b86-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-kxkfz\" (UID: \"2806dd41-f23b-466a-a187-4689685f6b86\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.924699 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/eea5deed-9d07-45b2-b400-64b7c2336994-dbus-socket\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.940676 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdghn\" (UniqueName: \"kubernetes.io/projected/eea5deed-9d07-45b2-b400-64b7c2336994-kube-api-access-gdghn\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.960394 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-rfrtx" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.006238 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6b64b75d5d-kgc46"] Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.006523 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.007136 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.025496 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zdcz\" (UniqueName: \"kubernetes.io/projected/2806dd41-f23b-466a-a187-4689685f6b86-kube-api-access-9zdcz\") pod \"nmstate-console-plugin-7754f76f8b-kxkfz\" (UID: \"2806dd41-f23b-466a-a187-4689685f6b86\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.025778 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2806dd41-f23b-466a-a187-4689685f6b86-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-kxkfz\" (UID: \"2806dd41-f23b-466a-a187-4689685f6b86\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.025957 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2806dd41-f23b-466a-a187-4689685f6b86-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-kxkfz\" (UID: \"2806dd41-f23b-466a-a187-4689685f6b86\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.026356 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b64b75d5d-kgc46"] Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.026582 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2806dd41-f23b-466a-a187-4689685f6b86-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-kxkfz\" (UID: \"2806dd41-f23b-466a-a187-4689685f6b86\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.030304 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2806dd41-f23b-466a-a187-4689685f6b86-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-kxkfz\" (UID: \"2806dd41-f23b-466a-a187-4689685f6b86\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.059824 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zdcz\" (UniqueName: \"kubernetes.io/projected/2806dd41-f23b-466a-a187-4689685f6b86-kube-api-access-9zdcz\") pod \"nmstate-console-plugin-7754f76f8b-kxkfz\" (UID: \"2806dd41-f23b-466a-a187-4689685f6b86\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.112740 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.128624 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-oauth-serving-cert\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.128703 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-trusted-ca-bundle\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.128749 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-service-ca\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.128805 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-oauth-config\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.128990 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-console-config\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.129055 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn5wq\" (UniqueName: \"kubernetes.io/projected/bf03a732-e32e-410a-ae17-1573a2854475-kube-api-access-zn5wq\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.129085 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-serving-cert\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.232746 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn5wq\" (UniqueName: \"kubernetes.io/projected/bf03a732-e32e-410a-ae17-1573a2854475-kube-api-access-zn5wq\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.232820 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-serving-cert\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.232884 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-oauth-serving-cert\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.232920 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-trusted-ca-bundle\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.232963 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-service-ca\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.233000 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-oauth-config\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.233057 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-console-config\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.234185 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-console-config\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.234452 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-service-ca\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.234674 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-oauth-serving-cert\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.235590 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-trusted-ca-bundle\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.237910 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-oauth-config\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.239840 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-serving-cert\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.252885 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn5wq\" (UniqueName: \"kubernetes.io/projected/bf03a732-e32e-410a-ae17-1573a2854475-kube-api-access-zn5wq\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.334943 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/be191f8d-d8ce-4f29-95f1-1278c108ca11-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-7hqmv\" (UID: \"be191f8d-d8ce-4f29-95f1-1278c108ca11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.339712 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.339923 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/be191f8d-d8ce-4f29-95f1-1278c108ca11-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-7hqmv\" (UID: \"be191f8d-d8ce-4f29-95f1-1278c108ca11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.431490 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rfrtx"] Jan 30 21:31:19 crc kubenswrapper[4751]: W0130 21:31:19.455884 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0ccd951_df7f_452f_b340_64fa7c9f9916.slice/crio-64e250cbfa65ab4a294b231d61ec3fc1303a35e92b6965b0a33549a8ef025c59 WatchSource:0}: Error finding container 64e250cbfa65ab4a294b231d61ec3fc1303a35e92b6965b0a33549a8ef025c59: Status 404 returned error can't find the container with id 64e250cbfa65ab4a294b231d61ec3fc1303a35e92b6965b0a33549a8ef025c59 Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.567455 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.643566 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz"] Jan 30 21:31:19 crc kubenswrapper[4751]: W0130 21:31:19.652035 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2806dd41_f23b_466a_a187_4689685f6b86.slice/crio-bee7bf0ae28ea8c607b3929519702ece404ce344fd7ff81c781f13045349b7ff WatchSource:0}: Error finding container bee7bf0ae28ea8c607b3929519702ece404ce344fd7ff81c781f13045349b7ff: Status 404 returned error can't find the container with id bee7bf0ae28ea8c607b3929519702ece404ce344fd7ff81c781f13045349b7ff Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.753308 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b64b75d5d-kgc46"] Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.843269 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b64b75d5d-kgc46" event={"ID":"bf03a732-e32e-410a-ae17-1573a2854475","Type":"ContainerStarted","Data":"940073f1b9050f0a93c1aa8e842c9477fdedfec5ed669f60a6eea0cf2c8dd11a"} Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.846733 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-d95cp" event={"ID":"eea5deed-9d07-45b2-b400-64b7c2336994","Type":"ContainerStarted","Data":"a10696e35c41b57c15b609887b59ba11d6d803ac7552b8134fddbadad31c7e30"} Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.847912 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" event={"ID":"2806dd41-f23b-466a-a187-4689685f6b86","Type":"ContainerStarted","Data":"bee7bf0ae28ea8c607b3929519702ece404ce344fd7ff81c781f13045349b7ff"} Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.853946 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rfrtx" event={"ID":"f0ccd951-df7f-452f-b340-64fa7c9f9916","Type":"ContainerStarted","Data":"64e250cbfa65ab4a294b231d61ec3fc1303a35e92b6965b0a33549a8ef025c59"} Jan 30 21:31:20 crc kubenswrapper[4751]: I0130 21:31:20.038226 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv"] Jan 30 21:31:20 crc kubenswrapper[4751]: W0130 21:31:20.040550 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe191f8d_d8ce_4f29_95f1_1278c108ca11.slice/crio-da9814746878bea05aa4d008ade7ae0c0b6f70753c00d02f67e5f599e8f348b3 WatchSource:0}: Error finding container da9814746878bea05aa4d008ade7ae0c0b6f70753c00d02f67e5f599e8f348b3: Status 404 returned error can't find the container with id da9814746878bea05aa4d008ade7ae0c0b6f70753c00d02f67e5f599e8f348b3 Jan 30 21:31:20 crc kubenswrapper[4751]: I0130 21:31:20.865130 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b64b75d5d-kgc46" event={"ID":"bf03a732-e32e-410a-ae17-1573a2854475","Type":"ContainerStarted","Data":"bddd330cf13a903e94930cf7c65192196ece6d61e6ec543ac96c6b64e5e23194"} Jan 30 21:31:20 crc kubenswrapper[4751]: I0130 21:31:20.866791 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" 
event={"ID":"be191f8d-d8ce-4f29-95f1-1278c108ca11","Type":"ContainerStarted","Data":"da9814746878bea05aa4d008ade7ae0c0b6f70753c00d02f67e5f599e8f348b3"} Jan 30 21:31:20 crc kubenswrapper[4751]: I0130 21:31:20.888957 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6b64b75d5d-kgc46" podStartSLOduration=2.888935257 podStartE2EDuration="2.888935257s" podCreationTimestamp="2026-01-30 21:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:31:20.885232658 +0000 UTC m=+1019.631055317" watchObservedRunningTime="2026-01-30 21:31:20.888935257 +0000 UTC m=+1019.634757896" Jan 30 21:31:22 crc kubenswrapper[4751]: I0130 21:31:22.881396 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rfrtx" event={"ID":"f0ccd951-df7f-452f-b340-64fa7c9f9916","Type":"ContainerStarted","Data":"a8b90bd4589936ff872d3f0c8eb3fb9ad768ef6fa487c15757ae24d0f92b3401"} Jan 30 21:31:22 crc kubenswrapper[4751]: I0130 21:31:22.883687 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" event={"ID":"be191f8d-d8ce-4f29-95f1-1278c108ca11","Type":"ContainerStarted","Data":"139d0b727b646bf2eb0a5bd5e8e1a92c889f740b95a64a90618c0cddc29c023c"} Jan 30 21:31:22 crc kubenswrapper[4751]: I0130 21:31:22.884059 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:22 crc kubenswrapper[4751]: I0130 21:31:22.897209 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" event={"ID":"2806dd41-f23b-466a-a187-4689685f6b86","Type":"ContainerStarted","Data":"b8b565c0f9ba4f8e1679a418d5449bc10cfc0a4320215c6463b168f0d1b9f2a1"} Jan 30 21:31:22 crc kubenswrapper[4751]: I0130 21:31:22.902721 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-d95cp" event={"ID":"eea5deed-9d07-45b2-b400-64b7c2336994","Type":"ContainerStarted","Data":"dbf6efa995485c6f8a2054af59b9e5ad99090e7eb3e5da670b4938812eac18b9"} Jan 30 21:31:22 crc kubenswrapper[4751]: I0130 21:31:22.908878 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:22 crc kubenswrapper[4751]: I0130 21:31:22.914214 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" podStartSLOduration=2.7968473879999998 podStartE2EDuration="4.913734178s" podCreationTimestamp="2026-01-30 21:31:18 +0000 UTC" firstStartedPulling="2026-01-30 21:31:20.043223411 +0000 UTC m=+1018.789046060" lastFinishedPulling="2026-01-30 21:31:22.160110201 +0000 UTC m=+1020.905932850" observedRunningTime="2026-01-30 21:31:22.907048259 +0000 UTC m=+1021.652870948" watchObservedRunningTime="2026-01-30 21:31:22.913734178 +0000 UTC m=+1021.659556837" Jan 30 21:31:22 crc kubenswrapper[4751]: I0130 21:31:22.966777 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-d95cp" podStartSLOduration=1.890659281 podStartE2EDuration="4.96675609s" podCreationTimestamp="2026-01-30 21:31:18 +0000 UTC" firstStartedPulling="2026-01-30 21:31:19.084146256 +0000 UTC m=+1017.829968905" lastFinishedPulling="2026-01-30 21:31:22.160243065 +0000 UTC m=+1020.906065714" observedRunningTime="2026-01-30 21:31:22.932411759 
+0000 UTC m=+1021.678234418" watchObservedRunningTime="2026-01-30 21:31:22.96675609 +0000 UTC m=+1021.712578769" Jan 30 21:31:25 crc kubenswrapper[4751]: I0130 21:31:25.935506 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rfrtx" event={"ID":"f0ccd951-df7f-452f-b340-64fa7c9f9916","Type":"ContainerStarted","Data":"ddb5f2b17b1acf96bf91aa7f2273f7c962f7ef1449c896f5b33eac87147bc4a7"} Jan 30 21:31:25 crc kubenswrapper[4751]: I0130 21:31:25.964999 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-rfrtx" podStartSLOduration=2.445839477 podStartE2EDuration="7.964975511s" podCreationTimestamp="2026-01-30 21:31:18 +0000 UTC" firstStartedPulling="2026-01-30 21:31:19.457717732 +0000 UTC m=+1018.203540381" lastFinishedPulling="2026-01-30 21:31:24.976853766 +0000 UTC m=+1023.722676415" observedRunningTime="2026-01-30 21:31:25.961064087 +0000 UTC m=+1024.706886766" watchObservedRunningTime="2026-01-30 21:31:25.964975511 +0000 UTC m=+1024.710798170" Jan 30 21:31:25 crc kubenswrapper[4751]: I0130 21:31:25.965284 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" podStartSLOduration=5.463725275 podStartE2EDuration="7.965274039s" podCreationTimestamp="2026-01-30 21:31:18 +0000 UTC" firstStartedPulling="2026-01-30 21:31:19.659884752 +0000 UTC m=+1018.405707401" lastFinishedPulling="2026-01-30 21:31:22.161433506 +0000 UTC m=+1020.907256165" observedRunningTime="2026-01-30 21:31:22.962073784 +0000 UTC m=+1021.707896493" watchObservedRunningTime="2026-01-30 21:31:25.965274039 +0000 UTC m=+1024.711096698" Jan 30 21:31:29 crc kubenswrapper[4751]: I0130 21:31:29.037108 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:29 crc kubenswrapper[4751]: I0130 21:31:29.340400 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:29 crc kubenswrapper[4751]: I0130 21:31:29.340491 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:29 crc kubenswrapper[4751]: I0130 21:31:29.347726 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:29 crc kubenswrapper[4751]: I0130 21:31:29.991833 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:30 crc kubenswrapper[4751]: I0130 21:31:30.071253 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-66d88878c9-plgvh"] Jan 30 21:31:39 crc kubenswrapper[4751]: I0130 21:31:39.909155 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:54 crc kubenswrapper[4751]: I0130 21:31:54.126604 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:31:54 crc kubenswrapper[4751]: I0130 21:31:54.127060 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" 
podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.137437 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-66d88878c9-plgvh" podUID="6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" containerName="console" containerID="cri-o://b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18" gracePeriod=15 Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.291889 4751 patch_prober.go:28] interesting pod/console-66d88878c9-plgvh container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.78:8443/health\": dial tcp 10.217.0.78:8443: connect: connection refused" start-of-body= Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.291945 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-66d88878c9-plgvh" podUID="6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.78:8443/health\": dial tcp 10.217.0.78:8443: connect: connection refused" Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.954684 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-66d88878c9-plgvh_6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c/console/0.log" Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.955275 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.981332 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-config\") pod \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.981385 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-trusted-ca-bundle\") pod \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.981433 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkqrp\" (UniqueName: \"kubernetes.io/projected/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-kube-api-access-pkqrp\") pod \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.981505 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-serving-cert\") pod \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.981541 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-service-ca\") pod \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.981575 4751 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-oauth-serving-cert\") pod \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.981626 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-oauth-config\") pod \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.982777 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-config" (OuterVolumeSpecName: "console-config") pod "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" (UID: "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.982871 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" (UID: "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.983105 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-service-ca" (OuterVolumeSpecName: "service-ca") pod "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" (UID: "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.983301 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" (UID: "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.989641 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" (UID: "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.989404 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-kube-api-access-pkqrp" (OuterVolumeSpecName: "kube-api-access-pkqrp") pod "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" (UID: "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c"). InnerVolumeSpecName "kube-api-access-pkqrp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.990215 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" (UID: "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.083485 4751 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.083511 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.083520 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkqrp\" (UniqueName: \"kubernetes.io/projected/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-kube-api-access-pkqrp\") on node \"crc\" DevicePath \"\"" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.083530 4751 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.083546 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.083554 4751 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.083562 4751 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.225234 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-66d88878c9-plgvh_6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c/console/0.log" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.225281 4751 generic.go:334] "Generic (PLEG): container finished" podID="6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" containerID="b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18" exitCode=2 Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.225305 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66d88878c9-plgvh" event={"ID":"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c","Type":"ContainerDied","Data":"b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18"} Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.225347 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66d88878c9-plgvh" event={"ID":"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c","Type":"ContainerDied","Data":"0e693c5eb441ca00dae4f66390c3ffc4f2ac93c82973ea2489bbd8ae4743393e"} Jan 30 21:31:56 crc 
kubenswrapper[4751]: I0130 21:31:56.225363 4751 scope.go:117] "RemoveContainer" containerID="b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.225371 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.243497 4751 scope.go:117] "RemoveContainer" containerID="b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18" Jan 30 21:31:56 crc kubenswrapper[4751]: E0130 21:31:56.243880 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18\": container with ID starting with b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18 not found: ID does not exist" containerID="b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.243909 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18"} err="failed to get container status \"b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18\": rpc error: code = NotFound desc = could not find container \"b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18\": container with ID starting with b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18 not found: ID does not exist" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.267505 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-66d88878c9-plgvh"] Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.277167 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-66d88878c9-plgvh"] Jan 30 21:31:57 crc kubenswrapper[4751]: I0130 21:31:57.839803 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw"] Jan 30 21:31:57 crc kubenswrapper[4751]: E0130 21:31:57.840373 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" containerName="console" Jan 30 21:31:57 crc kubenswrapper[4751]: I0130 21:31:57.840387 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" containerName="console" Jan 30 21:31:57 crc kubenswrapper[4751]: I0130 21:31:57.840574 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" containerName="console" Jan 30 21:31:57 crc kubenswrapper[4751]: I0130 21:31:57.841993 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:57 crc kubenswrapper[4751]: I0130 21:31:57.843475 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 21:31:57 crc kubenswrapper[4751]: I0130 21:31:57.853785 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw"] Jan 30 21:31:57 crc kubenswrapper[4751]: I0130 21:31:57.910897 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:57 crc kubenswrapper[4751]: I0130 21:31:57.910983 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvfrp\" (UniqueName: \"kubernetes.io/projected/00263593-80af-4a40-a2c4-538f582434c4-kube-api-access-tvfrp\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:57 crc kubenswrapper[4751]: I0130 21:31:57.911078 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:57 crc kubenswrapper[4751]: I0130 21:31:57.985044 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" path="/var/lib/kubelet/pods/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c/volumes" Jan 30 21:31:58 crc kubenswrapper[4751]: I0130 21:31:58.012238 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvfrp\" (UniqueName: \"kubernetes.io/projected/00263593-80af-4a40-a2c4-538f582434c4-kube-api-access-tvfrp\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:58 crc kubenswrapper[4751]: I0130 21:31:58.012384 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:58 crc kubenswrapper[4751]: I0130 21:31:58.012440 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:58 crc kubenswrapper[4751]: I0130 21:31:58.013015 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:58 crc kubenswrapper[4751]: I0130 21:31:58.014732 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:58 crc kubenswrapper[4751]: I0130 21:31:58.030891 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvfrp\" (UniqueName: \"kubernetes.io/projected/00263593-80af-4a40-a2c4-538f582434c4-kube-api-access-tvfrp\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:58 crc kubenswrapper[4751]: I0130 21:31:58.158865 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:58 crc kubenswrapper[4751]: I0130 21:31:58.637164 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw"] Jan 30 21:31:59 crc kubenswrapper[4751]: I0130 21:31:59.258824 4751 generic.go:334] "Generic (PLEG): container finished" podID="00263593-80af-4a40-a2c4-538f582434c4" containerID="015e49363f1ef00af3377dd3fbdb14f07dac52e78791d5185c7de6fd9d1315a5" exitCode=0 Jan 30 21:31:59 crc kubenswrapper[4751]: I0130 21:31:59.258989 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" event={"ID":"00263593-80af-4a40-a2c4-538f582434c4","Type":"ContainerDied","Data":"015e49363f1ef00af3377dd3fbdb14f07dac52e78791d5185c7de6fd9d1315a5"} Jan 30 21:31:59 crc kubenswrapper[4751]: I0130 21:31:59.259315 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" event={"ID":"00263593-80af-4a40-a2c4-538f582434c4","Type":"ContainerStarted","Data":"75dd171c1aa81e591965bde29d70499232510555a67b3af35ad52ef7f57f165f"} Jan 30 21:32:01 crc kubenswrapper[4751]: I0130 21:32:01.278374 4751 generic.go:334] "Generic (PLEG): container finished" podID="00263593-80af-4a40-a2c4-538f582434c4" containerID="019d6769c771749968382795035cedd534fac5774d0f2cfe6375c3f286f46059" exitCode=0 Jan 30 21:32:01 crc kubenswrapper[4751]: I0130 21:32:01.278480 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" event={"ID":"00263593-80af-4a40-a2c4-538f582434c4","Type":"ContainerDied","Data":"019d6769c771749968382795035cedd534fac5774d0f2cfe6375c3f286f46059"} Jan 30 21:32:02 crc kubenswrapper[4751]: I0130 
21:32:02.295281 4751 generic.go:334] "Generic (PLEG): container finished" podID="00263593-80af-4a40-a2c4-538f582434c4" containerID="45170160e5f46812feb877131837b613df8254eca544bf8b4de018c57971a777" exitCode=0 Jan 30 21:32:02 crc kubenswrapper[4751]: I0130 21:32:02.295373 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" event={"ID":"00263593-80af-4a40-a2c4-538f582434c4","Type":"ContainerDied","Data":"45170160e5f46812feb877131837b613df8254eca544bf8b4de018c57971a777"} Jan 30 21:32:03 crc kubenswrapper[4751]: I0130 21:32:03.656177 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:32:03 crc kubenswrapper[4751]: I0130 21:32:03.712452 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvfrp\" (UniqueName: \"kubernetes.io/projected/00263593-80af-4a40-a2c4-538f582434c4-kube-api-access-tvfrp\") pod \"00263593-80af-4a40-a2c4-538f582434c4\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " Jan 30 21:32:03 crc kubenswrapper[4751]: I0130 21:32:03.712574 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-bundle\") pod \"00263593-80af-4a40-a2c4-538f582434c4\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " Jan 30 21:32:03 crc kubenswrapper[4751]: I0130 21:32:03.712661 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-util\") pod \"00263593-80af-4a40-a2c4-538f582434c4\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " Jan 30 21:32:03 crc kubenswrapper[4751]: I0130 21:32:03.714269 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-bundle" (OuterVolumeSpecName: "bundle") pod "00263593-80af-4a40-a2c4-538f582434c4" (UID: "00263593-80af-4a40-a2c4-538f582434c4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:32:03 crc kubenswrapper[4751]: I0130 21:32:03.718530 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00263593-80af-4a40-a2c4-538f582434c4-kube-api-access-tvfrp" (OuterVolumeSpecName: "kube-api-access-tvfrp") pod "00263593-80af-4a40-a2c4-538f582434c4" (UID: "00263593-80af-4a40-a2c4-538f582434c4"). InnerVolumeSpecName "kube-api-access-tvfrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:32:03 crc kubenswrapper[4751]: I0130 21:32:03.728689 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-util" (OuterVolumeSpecName: "util") pod "00263593-80af-4a40-a2c4-538f582434c4" (UID: "00263593-80af-4a40-a2c4-538f582434c4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:32:03 crc kubenswrapper[4751]: I0130 21:32:03.814245 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:32:03 crc kubenswrapper[4751]: I0130 21:32:03.814354 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:32:03 crc kubenswrapper[4751]: I0130 21:32:03.814375 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvfrp\" (UniqueName: \"kubernetes.io/projected/00263593-80af-4a40-a2c4-538f582434c4-kube-api-access-tvfrp\") on node \"crc\" DevicePath \"\"" Jan 30 21:32:04 crc kubenswrapper[4751]: I0130 21:32:04.317760 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" event={"ID":"00263593-80af-4a40-a2c4-538f582434c4","Type":"ContainerDied","Data":"75dd171c1aa81e591965bde29d70499232510555a67b3af35ad52ef7f57f165f"} Jan 30 21:32:04 crc kubenswrapper[4751]: I0130 21:32:04.317796 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75dd171c1aa81e591965bde29d70499232510555a67b3af35ad52ef7f57f165f" Jan 30 21:32:04 crc kubenswrapper[4751]: I0130 21:32:04.317852 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.842224 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4"] Jan 30 21:32:12 crc kubenswrapper[4751]: E0130 21:32:12.842886 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00263593-80af-4a40-a2c4-538f582434c4" containerName="pull" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.842898 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="00263593-80af-4a40-a2c4-538f582434c4" containerName="pull" Jan 30 21:32:12 crc kubenswrapper[4751]: E0130 21:32:12.842910 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00263593-80af-4a40-a2c4-538f582434c4" containerName="util" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.842916 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="00263593-80af-4a40-a2c4-538f582434c4" containerName="util" Jan 30 21:32:12 crc kubenswrapper[4751]: E0130 21:32:12.842938 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00263593-80af-4a40-a2c4-538f582434c4" containerName="extract" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.842944 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="00263593-80af-4a40-a2c4-538f582434c4" containerName="extract" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.843071 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="00263593-80af-4a40-a2c4-538f582434c4" containerName="extract" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.843572 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.845227 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.845422 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.845427 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.845605 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.847617 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-5wbs4" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.860897 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4"] Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.910345 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc5fd\" (UniqueName: \"kubernetes.io/projected/088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a-kube-api-access-bc5fd\") pod \"metallb-operator-controller-manager-6697664f96-w8tr4\" (UID: \"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a\") " pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.910418 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a-apiservice-cert\") pod \"metallb-operator-controller-manager-6697664f96-w8tr4\" (UID: \"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a\") " pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.910466 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a-webhook-cert\") pod \"metallb-operator-controller-manager-6697664f96-w8tr4\" (UID: \"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a\") " pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.012314 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a-apiservice-cert\") pod \"metallb-operator-controller-manager-6697664f96-w8tr4\" (UID: \"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a\") " pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.012438 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a-webhook-cert\") pod \"metallb-operator-controller-manager-6697664f96-w8tr4\" (UID: \"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a\") " pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.012488 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc5fd\" (UniqueName: \"kubernetes.io/projected/088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a-kube-api-access-bc5fd\") pod \"metallb-operator-controller-manager-6697664f96-w8tr4\" (UID: \"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a\") " pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.019813 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a-webhook-cert\") pod \"metallb-operator-controller-manager-6697664f96-w8tr4\" (UID: \"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a\") " pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.019851 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a-apiservice-cert\") pod \"metallb-operator-controller-manager-6697664f96-w8tr4\" (UID: \"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a\") " pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.030084 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc5fd\" (UniqueName: \"kubernetes.io/projected/088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a-kube-api-access-bc5fd\") pod \"metallb-operator-controller-manager-6697664f96-w8tr4\" (UID: \"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a\") " pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.158278 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.175133 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-597477f4b5-q868h"] Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.176190 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.178371 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.178544 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-5tqht" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.179824 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.190548 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-597477f4b5-q868h"] Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.317576 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q24d2\" (UniqueName: \"kubernetes.io/projected/61545af5-1133-4922-a477-9155212b642c-kube-api-access-q24d2\") pod \"metallb-operator-webhook-server-597477f4b5-q868h\" (UID: \"61545af5-1133-4922-a477-9155212b642c\") " pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.317622 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61545af5-1133-4922-a477-9155212b642c-webhook-cert\") pod \"metallb-operator-webhook-server-597477f4b5-q868h\" (UID: \"61545af5-1133-4922-a477-9155212b642c\") " pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.317698 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61545af5-1133-4922-a477-9155212b642c-apiservice-cert\") pod \"metallb-operator-webhook-server-597477f4b5-q868h\" (UID: \"61545af5-1133-4922-a477-9155212b642c\") " pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.418783 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61545af5-1133-4922-a477-9155212b642c-apiservice-cert\") pod \"metallb-operator-webhook-server-597477f4b5-q868h\" (UID: \"61545af5-1133-4922-a477-9155212b642c\") " pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.419720 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q24d2\" (UniqueName: \"kubernetes.io/projected/61545af5-1133-4922-a477-9155212b642c-kube-api-access-q24d2\") pod \"metallb-operator-webhook-server-597477f4b5-q868h\" (UID: \"61545af5-1133-4922-a477-9155212b642c\") " pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.419749 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61545af5-1133-4922-a477-9155212b642c-webhook-cert\") pod \"metallb-operator-webhook-server-597477f4b5-q868h\" (UID: \"61545af5-1133-4922-a477-9155212b642c\") " pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 
21:32:13.425112 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61545af5-1133-4922-a477-9155212b642c-apiservice-cert\") pod \"metallb-operator-webhook-server-597477f4b5-q868h\" (UID: \"61545af5-1133-4922-a477-9155212b642c\") " pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h"
Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.439016 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61545af5-1133-4922-a477-9155212b642c-webhook-cert\") pod \"metallb-operator-webhook-server-597477f4b5-q868h\" (UID: \"61545af5-1133-4922-a477-9155212b642c\") " pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h"
Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.447646 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q24d2\" (UniqueName: \"kubernetes.io/projected/61545af5-1133-4922-a477-9155212b642c-kube-api-access-q24d2\") pod \"metallb-operator-webhook-server-597477f4b5-q868h\" (UID: \"61545af5-1133-4922-a477-9155212b642c\") " pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h"
Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.537344 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h"
Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.591894 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4"]
Jan 30 21:32:13 crc kubenswrapper[4751]: W0130 21:32:13.603345 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod088ac2b9_a8fd_4aa9_854d_a62a9ecd5e9a.slice/crio-997b9ba7dfef1ba0677237f5fe11e44c94ac60aceb28edfd5b3edd8254ea7f81 WatchSource:0}: Error finding container 997b9ba7dfef1ba0677237f5fe11e44c94ac60aceb28edfd5b3edd8254ea7f81: Status 404 returned error can't find the container with id 997b9ba7dfef1ba0677237f5fe11e44c94ac60aceb28edfd5b3edd8254ea7f81
Jan 30 21:32:13 crc kubenswrapper[4751]: W0130 21:32:13.983005 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61545af5_1133_4922_a477_9155212b642c.slice/crio-d4dc3103cb7694a11d1555e5af5a7e0d06278011634b90d902da353d081b7887 WatchSource:0}: Error finding container d4dc3103cb7694a11d1555e5af5a7e0d06278011634b90d902da353d081b7887: Status 404 returned error can't find the container with id d4dc3103cb7694a11d1555e5af5a7e0d06278011634b90d902da353d081b7887
Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.998283 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-597477f4b5-q868h"]
Jan 30 21:32:14 crc kubenswrapper[4751]: I0130 21:32:14.403662 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" event={"ID":"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a","Type":"ContainerStarted","Data":"997b9ba7dfef1ba0677237f5fe11e44c94ac60aceb28edfd5b3edd8254ea7f81"}
Jan 30 21:32:14 crc kubenswrapper[4751]: I0130 21:32:14.405404 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" event={"ID":"61545af5-1133-4922-a477-9155212b642c","Type":"ContainerStarted","Data":"d4dc3103cb7694a11d1555e5af5a7e0d06278011634b90d902da353d081b7887"}
Jan 30 21:32:17 crc kubenswrapper[4751]: I0130 21:32:17.426031 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" event={"ID":"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a","Type":"ContainerStarted","Data":"cf5488430e6bdbd56699cb2651d484f8e9cd245fb98da74f5eaebfbef4021e83"}
Jan 30 21:32:17 crc kubenswrapper[4751]: I0130 21:32:17.426487 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4"
Jan 30 21:32:17 crc kubenswrapper[4751]: I0130 21:32:17.446792 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" podStartSLOduration=2.4978778569999998 podStartE2EDuration="5.446771788s" podCreationTimestamp="2026-01-30 21:32:12 +0000 UTC" firstStartedPulling="2026-01-30 21:32:13.607963874 +0000 UTC m=+1072.353786523" lastFinishedPulling="2026-01-30 21:32:16.556857805 +0000 UTC m=+1075.302680454" observedRunningTime="2026-01-30 21:32:17.445515105 +0000 UTC m=+1076.191337794" watchObservedRunningTime="2026-01-30 21:32:17.446771788 +0000 UTC m=+1076.192594437"
Jan 30 21:32:19 crc kubenswrapper[4751]: I0130 21:32:19.444390 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" event={"ID":"61545af5-1133-4922-a477-9155212b642c","Type":"ContainerStarted","Data":"d3ab546e5b4a99040f005e03e377e15b9982e88bb5c9536e1304ec50c8087d9c"}
Jan 30 21:32:19 crc kubenswrapper[4751]: I0130 21:32:19.445476 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h"
Jan 30 21:32:19 crc kubenswrapper[4751]: I0130 21:32:19.460658 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" podStartSLOduration=2.134988968 podStartE2EDuration="6.460641987s" podCreationTimestamp="2026-01-30 21:32:13 +0000 UTC" firstStartedPulling="2026-01-30 21:32:13.993513524 +0000 UTC m=+1072.739336183" lastFinishedPulling="2026-01-30 21:32:18.319166553 +0000 UTC m=+1077.064989202" observedRunningTime="2026-01-30 21:32:19.459362243 +0000 UTC m=+1078.205184892" watchObservedRunningTime="2026-01-30 21:32:19.460641987 +0000 UTC m=+1078.206464636"
Jan 30 21:32:22 crc kubenswrapper[4751]: I0130 21:32:22.658555 4751 scope.go:117] "RemoveContainer" containerID="c37392c9c28591d30af6fa13864c5cce74c1af8be4cc91616fe120071a372d74"
Jan 30 21:32:22 crc kubenswrapper[4751]: I0130 21:32:22.696533 4751 scope.go:117] "RemoveContainer" containerID="8fca4ce58dcc1f6c42dc0ef9782db856f25df74c010a55261aa5d6ba4308f0b1"
Jan 30 21:32:22 crc kubenswrapper[4751]: I0130 21:32:22.719671 4751 scope.go:117] "RemoveContainer" containerID="8d83fa7db634ce7a7858c27562ccdf062d9dea0a838bb5aacc88523290613dfc"
Jan 30 21:32:24 crc kubenswrapper[4751]: I0130 21:32:24.126601 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 21:32:24 crc kubenswrapper[4751]: I0130 21:32:24.126866 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 21:32:33 crc kubenswrapper[4751]: I0130 21:32:33.543595 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h"
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.162686 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4"
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.844365 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-9zjh6"]
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.849460 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.854101 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.854467 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.854693 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-b78r7"
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.861033 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97"]
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.862266 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97"
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.865027 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.876347 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97"]
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.936048 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8544a86-1b67-4c2e-9b56-ca708c47b4e8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-2zl97\" (UID: \"f8544a86-1b67-4c2e-9b56-ca708c47b4e8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97"
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.936189 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-frr-startup\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.936336 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-metrics-certs\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.936355 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-frr-sockets\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.936820 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfd58\" (UniqueName: \"kubernetes.io/projected/f8544a86-1b67-4c2e-9b56-ca708c47b4e8-kube-api-access-wfd58\") pod \"frr-k8s-webhook-server-7df86c4f6c-2zl97\" (UID: \"f8544a86-1b67-4c2e-9b56-ca708c47b4e8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97"
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.936851 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-reloader\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.936875 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-frr-conf\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.936896 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-metrics\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.936913 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8rm4\" (UniqueName: \"kubernetes.io/projected/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-kube-api-access-n8rm4\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.964919 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-zqbmp"]
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.974207 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-zqbmp"
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.978661 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.978894 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-n966j"
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.979075 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.979091 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.005944 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-p8nst"]
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.010460 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-p8nst"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.017966 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.024222 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-p8nst"]
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.038753 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-memberlist\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.038853 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfd58\" (UniqueName: \"kubernetes.io/projected/f8544a86-1b67-4c2e-9b56-ca708c47b4e8-kube-api-access-wfd58\") pod \"frr-k8s-webhook-server-7df86c4f6c-2zl97\" (UID: \"f8544a86-1b67-4c2e-9b56-ca708c47b4e8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.038887 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-reloader\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.038917 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-frr-conf\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.038943 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-metrics\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.038968 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8rm4\" (UniqueName: \"kubernetes.io/projected/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-kube-api-access-n8rm4\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.038993 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztpsr\" (UniqueName: \"kubernetes.io/projected/e9fc7f0b-0bab-4435-82d8-b78841d64687-kube-api-access-ztpsr\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.039018 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e9fc7f0b-0bab-4435-82d8-b78841d64687-metallb-excludel2\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.039082 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8544a86-1b67-4c2e-9b56-ca708c47b4e8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-2zl97\" (UID: \"f8544a86-1b67-4c2e-9b56-ca708c47b4e8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.039108 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-frr-startup\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.039149 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-metrics-certs\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.039181 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-metrics-certs\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.039202 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-frr-sockets\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.039700 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-frr-sockets\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:54 crc kubenswrapper[4751]: E0130 21:32:54.039845 4751 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Jan 30 21:32:54 crc kubenswrapper[4751]: E0130 21:32:54.039907 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8544a86-1b67-4c2e-9b56-ca708c47b4e8-cert podName:f8544a86-1b67-4c2e-9b56-ca708c47b4e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:32:54.539885069 +0000 UTC m=+1113.285707718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f8544a86-1b67-4c2e-9b56-ca708c47b4e8-cert") pod "frr-k8s-webhook-server-7df86c4f6c-2zl97" (UID: "f8544a86-1b67-4c2e-9b56-ca708c47b4e8") : secret "frr-k8s-webhook-server-cert" not found
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.040771 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-reloader\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.041055 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-frr-conf\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.041522 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-metrics\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.041993 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-frr-startup\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.059463 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-metrics-certs\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.080164 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfd58\" (UniqueName: \"kubernetes.io/projected/f8544a86-1b67-4c2e-9b56-ca708c47b4e8-kube-api-access-wfd58\") pod \"frr-k8s-webhook-server-7df86c4f6c-2zl97\" (UID: \"f8544a86-1b67-4c2e-9b56-ca708c47b4e8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.085800 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8rm4\" (UniqueName: \"kubernetes.io/projected/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-kube-api-access-n8rm4\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.127005 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.127071 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.127123 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.127862 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4084bd2e19ec539ac0bc075f3b6a34007de80a7e632827590212d241d8cb0234"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.127935 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://4084bd2e19ec539ac0bc075f3b6a34007de80a7e632827590212d241d8cb0234" gracePeriod=600
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.140419 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztpsr\" (UniqueName: \"kubernetes.io/projected/e9fc7f0b-0bab-4435-82d8-b78841d64687-kube-api-access-ztpsr\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.140481 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e9fc7f0b-0bab-4435-82d8-b78841d64687-metallb-excludel2\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.140569 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/41e79790-830a-48bb-93b6-dd55dc050acf-cert\") pod \"controller-6968d8fdc4-p8nst\" (UID: \"41e79790-830a-48bb-93b6-dd55dc050acf\") " pod="metallb-system/controller-6968d8fdc4-p8nst"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.140602 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-metrics-certs\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.140646 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zdw8\" (UniqueName: \"kubernetes.io/projected/41e79790-830a-48bb-93b6-dd55dc050acf-kube-api-access-5zdw8\") pod \"controller-6968d8fdc4-p8nst\" (UID: \"41e79790-830a-48bb-93b6-dd55dc050acf\") " pod="metallb-system/controller-6968d8fdc4-p8nst"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.140663 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-memberlist\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp"
Jan 30 21:32:54 crc kubenswrapper[4751]: E0130 21:32:54.140762 4751 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 30 21:32:54 crc kubenswrapper[4751]: E0130 21:32:54.140811 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-memberlist podName:e9fc7f0b-0bab-4435-82d8-b78841d64687 nodeName:}" failed. No retries permitted until 2026-01-30 21:32:54.640794513 +0000 UTC m=+1113.386617162 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-memberlist") pod "speaker-zqbmp" (UID: "e9fc7f0b-0bab-4435-82d8-b78841d64687") : secret "metallb-memberlist" not found
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.140828 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41e79790-830a-48bb-93b6-dd55dc050acf-metrics-certs\") pod \"controller-6968d8fdc4-p8nst\" (UID: \"41e79790-830a-48bb-93b6-dd55dc050acf\") " pod="metallb-system/controller-6968d8fdc4-p8nst"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.141402 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e9fc7f0b-0bab-4435-82d8-b78841d64687-metallb-excludel2\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.145913 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-metrics-certs\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.171770 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztpsr\" (UniqueName: \"kubernetes.io/projected/e9fc7f0b-0bab-4435-82d8-b78841d64687-kube-api-access-ztpsr\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.183032 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.241896 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41e79790-830a-48bb-93b6-dd55dc050acf-metrics-certs\") pod \"controller-6968d8fdc4-p8nst\" (UID: \"41e79790-830a-48bb-93b6-dd55dc050acf\") " pod="metallb-system/controller-6968d8fdc4-p8nst"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.242030 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/41e79790-830a-48bb-93b6-dd55dc050acf-cert\") pod \"controller-6968d8fdc4-p8nst\" (UID: \"41e79790-830a-48bb-93b6-dd55dc050acf\") " pod="metallb-system/controller-6968d8fdc4-p8nst"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.242092 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zdw8\" (UniqueName: \"kubernetes.io/projected/41e79790-830a-48bb-93b6-dd55dc050acf-kube-api-access-5zdw8\") pod \"controller-6968d8fdc4-p8nst\" (UID: \"41e79790-830a-48bb-93b6-dd55dc050acf\") " pod="metallb-system/controller-6968d8fdc4-p8nst"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.244927 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.245778 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41e79790-830a-48bb-93b6-dd55dc050acf-metrics-certs\") pod \"controller-6968d8fdc4-p8nst\" (UID: \"41e79790-830a-48bb-93b6-dd55dc050acf\") " pod="metallb-system/controller-6968d8fdc4-p8nst"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.257756 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/41e79790-830a-48bb-93b6-dd55dc050acf-cert\") pod \"controller-6968d8fdc4-p8nst\" (UID: \"41e79790-830a-48bb-93b6-dd55dc050acf\") " pod="metallb-system/controller-6968d8fdc4-p8nst"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.287250 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zdw8\" (UniqueName: \"kubernetes.io/projected/41e79790-830a-48bb-93b6-dd55dc050acf-kube-api-access-5zdw8\") pod \"controller-6968d8fdc4-p8nst\" (UID: \"41e79790-830a-48bb-93b6-dd55dc050acf\") " pod="metallb-system/controller-6968d8fdc4-p8nst"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.385734 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-p8nst"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.546734 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8544a86-1b67-4c2e-9b56-ca708c47b4e8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-2zl97\" (UID: \"f8544a86-1b67-4c2e-9b56-ca708c47b4e8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.554766 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8544a86-1b67-4c2e-9b56-ca708c47b4e8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-2zl97\" (UID: \"f8544a86-1b67-4c2e-9b56-ca708c47b4e8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.648861 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-memberlist\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp"
Jan 30 21:32:54 crc kubenswrapper[4751]: E0130 21:32:54.649085 4751 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 30 21:32:54 crc kubenswrapper[4751]: E0130 21:32:54.649185 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-memberlist podName:e9fc7f0b-0bab-4435-82d8-b78841d64687 nodeName:}" failed. No retries permitted until 2026-01-30 21:32:55.649159183 +0000 UTC m=+1114.394981832 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-memberlist") pod "speaker-zqbmp" (UID: "e9fc7f0b-0bab-4435-82d8-b78841d64687") : secret "metallb-memberlist" not found
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.732236 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="4084bd2e19ec539ac0bc075f3b6a34007de80a7e632827590212d241d8cb0234" exitCode=0
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.732353 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"4084bd2e19ec539ac0bc075f3b6a34007de80a7e632827590212d241d8cb0234"}
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.732427 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"589b659983c64eaeb9431668de4131b84f85d7d4aaf79c3e0b75a24b0812e09e"}
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.732457 4751 scope.go:117] "RemoveContainer" containerID="ad350159473538b7294a1cb17b3c91bed6ccae12ecd005a2dc1c208ac650225b"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.736077 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9zjh6" event={"ID":"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4","Type":"ContainerStarted","Data":"4ad7da4df8cbf53720cc60439b0d1a9d6e905acdc358824f918e94585616669b"}
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.800517 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97"
Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.824493 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-p8nst"]
Jan 30 21:32:54 crc kubenswrapper[4751]: W0130 21:32:54.829921 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41e79790_830a_48bb_93b6_dd55dc050acf.slice/crio-aae88e56959a188e9891d429b5ece193865721750280788b7eb0aef6304365bb WatchSource:0}: Error finding container aae88e56959a188e9891d429b5ece193865721750280788b7eb0aef6304365bb: Status 404 returned error can't find the container with id aae88e56959a188e9891d429b5ece193865721750280788b7eb0aef6304365bb
Jan 30 21:32:55 crc kubenswrapper[4751]: I0130 21:32:55.250965 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97"]
Jan 30 21:32:55 crc kubenswrapper[4751]: I0130 21:32:55.669318 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-memberlist\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp"
Jan 30 21:32:55 crc kubenswrapper[4751]: I0130 21:32:55.684624 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-memberlist\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp"
Jan 30 21:32:55 crc kubenswrapper[4751]: I0130 21:32:55.747187 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-p8nst" event={"ID":"41e79790-830a-48bb-93b6-dd55dc050acf","Type":"ContainerStarted","Data":"2f0cd3932ce00118212f2632f6400b2bf6938a51c21ebbd44cb2f5ccc96a28c3"}
Jan 30 21:32:55 crc kubenswrapper[4751]: I0130 21:32:55.747249 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-p8nst" event={"ID":"41e79790-830a-48bb-93b6-dd55dc050acf","Type":"ContainerStarted","Data":"4c7f186f59cf5dc402bf48f41ddf40e567aea195b575b317598bd105c7d3597c"}
Jan 30 21:32:55 crc kubenswrapper[4751]: I0130 21:32:55.747263 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-p8nst" event={"ID":"41e79790-830a-48bb-93b6-dd55dc050acf","Type":"ContainerStarted","Data":"aae88e56959a188e9891d429b5ece193865721750280788b7eb0aef6304365bb"}
Jan 30 21:32:55 crc kubenswrapper[4751]: I0130 21:32:55.747281 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-p8nst"
Jan 30 21:32:55 crc kubenswrapper[4751]: I0130 21:32:55.748580 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97" event={"ID":"f8544a86-1b67-4c2e-9b56-ca708c47b4e8","Type":"ContainerStarted","Data":"fad93262aeea3b4a3bf8ab24159939ed43c4fb1478eb2f0114a69d67832bfc7b"}
Jan 30 21:32:55 crc kubenswrapper[4751]: I0130 21:32:55.766862 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-p8nst" podStartSLOduration=2.76683709 podStartE2EDuration="2.76683709s" podCreationTimestamp="2026-01-30 21:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:32:55.759939215 +0000 UTC m=+1114.505761874" watchObservedRunningTime="2026-01-30 21:32:55.76683709 +0000 UTC m=+1114.512659739"
Jan 30 21:32:55 crc kubenswrapper[4751]: I0130 21:32:55.799823 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-zqbmp"
Jan 30 21:32:55 crc kubenswrapper[4751]: W0130 21:32:55.824155 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9fc7f0b_0bab_4435_82d8_b78841d64687.slice/crio-422233dc75e0ca29b1f929c27aaf33af34757d80dd89b02ec3b2e14e27700a80 WatchSource:0}: Error finding container 422233dc75e0ca29b1f929c27aaf33af34757d80dd89b02ec3b2e14e27700a80: Status 404 returned error can't find the container with id 422233dc75e0ca29b1f929c27aaf33af34757d80dd89b02ec3b2e14e27700a80
Jan 30 21:32:56 crc kubenswrapper[4751]: I0130 21:32:56.763807 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zqbmp" event={"ID":"e9fc7f0b-0bab-4435-82d8-b78841d64687","Type":"ContainerStarted","Data":"8d9587011defe44b1f694231d492bf571a8949856ff0c8c6fa191fcbba802a4f"}
Jan 30 21:32:56 crc kubenswrapper[4751]: I0130 21:32:56.765108 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zqbmp" event={"ID":"e9fc7f0b-0bab-4435-82d8-b78841d64687","Type":"ContainerStarted","Data":"219a584ce3a579f7f106484b76dd6ea3cbaa96058dc67a146891459a59d84c53"}
Jan 30 21:32:56 crc kubenswrapper[4751]: I0130 21:32:56.765212 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zqbmp" event={"ID":"e9fc7f0b-0bab-4435-82d8-b78841d64687","Type":"ContainerStarted","Data":"422233dc75e0ca29b1f929c27aaf33af34757d80dd89b02ec3b2e14e27700a80"}
Jan 30 21:32:56 crc kubenswrapper[4751]: I0130 21:32:56.765460 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-zqbmp"
Jan 30 21:32:56 crc kubenswrapper[4751]: I0130 21:32:56.793442 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-zqbmp" podStartSLOduration=3.793422166 podStartE2EDuration="3.793422166s" podCreationTimestamp="2026-01-30 21:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:32:56.789341346 +0000 UTC m=+1115.535164015" watchObservedRunningTime="2026-01-30 21:32:56.793422166 +0000 UTC m=+1115.539244815"
Jan 30 21:33:02 crc kubenswrapper[4751]: I0130 21:33:02.823972 4751 generic.go:334] "Generic (PLEG): container finished" podID="e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4" containerID="cd3828ad15ab97c99197cec86fdc99e5269d1569ca2cbd900a865fcd55d21898" exitCode=0
Jan 30 21:33:02 crc kubenswrapper[4751]: I0130 21:33:02.824095 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9zjh6" event={"ID":"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4","Type":"ContainerDied","Data":"cd3828ad15ab97c99197cec86fdc99e5269d1569ca2cbd900a865fcd55d21898"}
Jan 30 21:33:02 crc kubenswrapper[4751]: I0130 21:33:02.829903 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97" event={"ID":"f8544a86-1b67-4c2e-9b56-ca708c47b4e8","Type":"ContainerStarted","Data":"68aacfd2fb520af61841a9dec205ceccbddedc9f6aa718869bde323dd8c55696"}
Jan 30 21:33:02 crc kubenswrapper[4751]: I0130 21:33:02.830171 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97"
Jan 30 21:33:02 crc kubenswrapper[4751]: I0130 21:33:02.876442 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97" podStartSLOduration=2.636285471 podStartE2EDuration="9.876424279s" podCreationTimestamp="2026-01-30 21:32:53 +0000 UTC" firstStartedPulling="2026-01-30 21:32:55.258799308 +0000 UTC m=+1114.004621957" lastFinishedPulling="2026-01-30 21:33:02.498938106 +0000 UTC m=+1121.244760765" observedRunningTime="2026-01-30 21:33:02.872975617 +0000 UTC m=+1121.618798266" watchObservedRunningTime="2026-01-30 21:33:02.876424279 +0000 UTC m=+1121.622246928"
Jan 30 21:33:03 crc kubenswrapper[4751]: I0130 21:33:03.840237 4751 generic.go:334] "Generic (PLEG): container finished" podID="e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4" containerID="dabac1b2cefa268d14d21955b6c95f64e6052e5ef4033fb81eaa3dda2b12c5df" exitCode=0
Jan 30 21:33:03 crc kubenswrapper[4751]: I0130 21:33:03.840369 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9zjh6" event={"ID":"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4","Type":"ContainerDied","Data":"dabac1b2cefa268d14d21955b6c95f64e6052e5ef4033fb81eaa3dda2b12c5df"}
Jan 30 21:33:04 crc kubenswrapper[4751]: I0130 21:33:04.394637 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-p8nst"
Jan 30 21:33:04 crc kubenswrapper[4751]: I0130 21:33:04.853016 4751 generic.go:334] "Generic (PLEG): container finished" podID="e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4" containerID="5b69ce330fd4b3a70263367badfaa303248925286b00c1946afa56f219a59fa8" exitCode=0
Jan 30 21:33:04 crc kubenswrapper[4751]: I0130 21:33:04.853099 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9zjh6" event={"ID":"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4","Type":"ContainerDied","Data":"5b69ce330fd4b3a70263367badfaa303248925286b00c1946afa56f219a59fa8"}
Jan 30 21:33:05 crc kubenswrapper[4751]: I0130 21:33:05.865805 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9zjh6" event={"ID":"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4","Type":"ContainerStarted","Data":"6199f52ebf105d57d4329cc5a413b7b8298d1c59ded29398a6a091c00fbc850b"}
Jan 30 21:33:05 crc kubenswrapper[4751]: I0130 21:33:05.866120 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9zjh6" event={"ID":"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4","Type":"ContainerStarted","Data":"b9e4c85dceb8dcae3a6478781dfbdba2f255351604d0652c6adf308df71bf576"}
Jan 30 21:33:06 crc kubenswrapper[4751]: I0130 21:33:06.878137 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9zjh6" event={"ID":"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4","Type":"ContainerStarted","Data":"bac3a2e6f9198cc329b832ad6797550ab170be9fdf82f4ef1480030abdfd61e7"}
Jan 30 21:33:06 crc kubenswrapper[4751]: I0130 21:33:06.878716 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9zjh6" event={"ID":"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4","Type":"ContainerStarted","Data":"ea35b1eeeee23dfe668c962048df516377a773b512369ce63c18d112a1f5e9c6"}
Jan 30 21:33:06 crc kubenswrapper[4751]: I0130 21:33:06.878730 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9zjh6" event={"ID":"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4","Type":"ContainerStarted","Data":"eee5744605a48ea2d4d152a246f8caa40dcba4a63ba5a1e3980e5854560b4766"}
Jan 30 21:33:06 crc kubenswrapper[4751]: I0130 21:33:06.878743 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9zjh6" event={"ID":"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4","Type":"ContainerStarted","Data":"5d7837c995592c1ca2ab41530b875ef8f8888c556ed3b0016fffba70feb5d996"}
Jan 30 21:33:06 crc kubenswrapper[4751]: I0130 21:33:06.878761 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:33:06 crc kubenswrapper[4751]: I0130 21:33:06.905134 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-9zjh6" podStartSLOduration=5.744765898 podStartE2EDuration="13.905115831s" podCreationTimestamp="2026-01-30 21:32:53 +0000 UTC" firstStartedPulling="2026-01-30 21:32:54.361958359 +0000 UTC m=+1113.107781008" lastFinishedPulling="2026-01-30 21:33:02.522308272 +0000 UTC m=+1121.268130941" observedRunningTime="2026-01-30 21:33:06.901753251 +0000 UTC m=+1125.647575940" watchObservedRunningTime="2026-01-30 21:33:06.905115831 +0000 UTC m=+1125.650938480"
Jan 30 21:33:09 crc kubenswrapper[4751]: I0130 21:33:09.183706 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:33:09 crc kubenswrapper[4751]: I0130 21:33:09.238701 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:33:14 crc kubenswrapper[4751]: I0130 21:33:14.808201 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97"
Jan 30 21:33:15 crc kubenswrapper[4751]: I0130 21:33:15.802626 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-zqbmp"
Jan 30 21:33:18 crc kubenswrapper[4751]: I0130 21:33:18.633717 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6xdfj"]
Jan 30 21:33:18 crc kubenswrapper[4751]: I0130 21:33:18.635341 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6xdfj"
Jan 30 21:33:18 crc kubenswrapper[4751]: I0130 21:33:18.645894 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6xdfj"]
Jan 30 21:33:18 crc kubenswrapper[4751]: I0130 21:33:18.649736 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-c2zxr"
Jan 30 21:33:18 crc kubenswrapper[4751]: I0130 21:33:18.649966 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 30 21:33:18 crc kubenswrapper[4751]: I0130 21:33:18.650150 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 30 21:33:18 crc kubenswrapper[4751]: I0130 21:33:18.716945 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmxrv\" (UniqueName: \"kubernetes.io/projected/c7891c80-5b04-4a6e-8b3b-8b68efa8a6de-kube-api-access-cmxrv\") pod \"openstack-operator-index-6xdfj\" (UID: \"c7891c80-5b04-4a6e-8b3b-8b68efa8a6de\") " pod="openstack-operators/openstack-operator-index-6xdfj"
Jan 30 21:33:18 crc kubenswrapper[4751]: I0130 21:33:18.818667 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmxrv\" (UniqueName: \"kubernetes.io/projected/c7891c80-5b04-4a6e-8b3b-8b68efa8a6de-kube-api-access-cmxrv\") pod \"openstack-operator-index-6xdfj\" (UID: \"c7891c80-5b04-4a6e-8b3b-8b68efa8a6de\") " pod="openstack-operators/openstack-operator-index-6xdfj"
Jan 30 21:33:18 crc kubenswrapper[4751]: I0130 21:33:18.850934 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmxrv\" (UniqueName: \"kubernetes.io/projected/c7891c80-5b04-4a6e-8b3b-8b68efa8a6de-kube-api-access-cmxrv\") pod \"openstack-operator-index-6xdfj\" (UID: \"c7891c80-5b04-4a6e-8b3b-8b68efa8a6de\") " pod="openstack-operators/openstack-operator-index-6xdfj"
Jan 30 21:33:18 crc kubenswrapper[4751]: I0130 21:33:18.991277 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6xdfj"
Jan 30 21:33:19 crc kubenswrapper[4751]: I0130 21:33:19.453898 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6xdfj"]
Jan 30 21:33:20 crc kubenswrapper[4751]: I0130 21:33:20.000311 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6xdfj" event={"ID":"c7891c80-5b04-4a6e-8b3b-8b68efa8a6de","Type":"ContainerStarted","Data":"5085fba647ce3ba6d64c0eb174bd6b9b77d5b038e1666bf227ff6b20406417e9"}
Jan 30 21:33:22 crc kubenswrapper[4751]: I0130 21:33:22.011184 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6xdfj"]
Jan 30 21:33:22 crc kubenswrapper[4751]: I0130 21:33:22.608072 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-lw6gm"]
Jan 30 21:33:22 crc kubenswrapper[4751]: I0130 21:33:22.609677 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lw6gm"
Jan 30 21:33:22 crc kubenswrapper[4751]: I0130 21:33:22.621623 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lw6gm"]
Jan 30 21:33:22 crc kubenswrapper[4751]: I0130 21:33:22.707931 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqf5w\" (UniqueName: \"kubernetes.io/projected/bd6eaa60-4995-4ace-8ab0-a880f09cbee0-kube-api-access-bqf5w\") pod \"openstack-operator-index-lw6gm\" (UID: \"bd6eaa60-4995-4ace-8ab0-a880f09cbee0\") " pod="openstack-operators/openstack-operator-index-lw6gm"
Jan 30 21:33:22 crc kubenswrapper[4751]: I0130 21:33:22.809649 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqf5w\" (UniqueName: \"kubernetes.io/projected/bd6eaa60-4995-4ace-8ab0-a880f09cbee0-kube-api-access-bqf5w\") pod \"openstack-operator-index-lw6gm\" (UID: \"bd6eaa60-4995-4ace-8ab0-a880f09cbee0\") " pod="openstack-operators/openstack-operator-index-lw6gm"
Jan 30 21:33:22 crc kubenswrapper[4751]: I0130 21:33:22.834056 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqf5w\" (UniqueName: \"kubernetes.io/projected/bd6eaa60-4995-4ace-8ab0-a880f09cbee0-kube-api-access-bqf5w\") pod \"openstack-operator-index-lw6gm\" (UID: \"bd6eaa60-4995-4ace-8ab0-a880f09cbee0\") " pod="openstack-operators/openstack-operator-index-lw6gm"
Jan 30 21:33:22 crc kubenswrapper[4751]: I0130 21:33:22.945211 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lw6gm"
Jan 30 21:33:23 crc kubenswrapper[4751]: I0130 21:33:23.035427 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6xdfj" event={"ID":"c7891c80-5b04-4a6e-8b3b-8b68efa8a6de","Type":"ContainerStarted","Data":"9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181"}
Jan 30 21:33:23 crc kubenswrapper[4751]: I0130 21:33:23.035574 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-6xdfj" podUID="c7891c80-5b04-4a6e-8b3b-8b68efa8a6de" containerName="registry-server" containerID="cri-o://9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181" gracePeriod=2
Jan 30 21:33:23 crc kubenswrapper[4751]: I0130 21:33:23.414529 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6xdfj" podStartSLOduration=2.930631172 podStartE2EDuration="5.414512634s" podCreationTimestamp="2026-01-30 21:33:18 +0000 UTC" firstStartedPulling="2026-01-30 21:33:19.459950647 +0000 UTC m=+1138.205773296" lastFinishedPulling="2026-01-30 21:33:21.943832099 +0000 UTC m=+1140.689654758" observedRunningTime="2026-01-30 21:33:23.06282562 +0000 UTC m=+1141.808648269" watchObservedRunningTime="2026-01-30 21:33:23.414512634 +0000 UTC m=+1142.160335283"
Jan 30 21:33:23 crc kubenswrapper[4751]: I0130 21:33:23.420344 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lw6gm"]
Jan 30 21:33:23 crc kubenswrapper[4751]: W0130 21:33:23.424866 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd6eaa60_4995_4ace_8ab0_a880f09cbee0.slice/crio-173417603d83ece8479c25c17c4d08e017273465709efb2719a980a267da39f4 WatchSource:0}: Error finding container 173417603d83ece8479c25c17c4d08e017273465709efb2719a980a267da39f4: Status 404 returned error can't find the container with id 173417603d83ece8479c25c17c4d08e017273465709efb2719a980a267da39f4
Jan 30 21:33:23 crc kubenswrapper[4751]: I0130 21:33:23.754300 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6xdfj"
Jan 30 21:33:23 crc kubenswrapper[4751]: I0130 21:33:23.827133 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmxrv\" (UniqueName: \"kubernetes.io/projected/c7891c80-5b04-4a6e-8b3b-8b68efa8a6de-kube-api-access-cmxrv\") pod \"c7891c80-5b04-4a6e-8b3b-8b68efa8a6de\" (UID: \"c7891c80-5b04-4a6e-8b3b-8b68efa8a6de\") "
Jan 30 21:33:23 crc kubenswrapper[4751]: I0130 21:33:23.833681 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7891c80-5b04-4a6e-8b3b-8b68efa8a6de-kube-api-access-cmxrv" (OuterVolumeSpecName: "kube-api-access-cmxrv") pod "c7891c80-5b04-4a6e-8b3b-8b68efa8a6de" (UID: "c7891c80-5b04-4a6e-8b3b-8b68efa8a6de"). InnerVolumeSpecName "kube-api-access-cmxrv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:33:23 crc kubenswrapper[4751]: I0130 21:33:23.929782 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmxrv\" (UniqueName: \"kubernetes.io/projected/c7891c80-5b04-4a6e-8b3b-8b68efa8a6de-kube-api-access-cmxrv\") on node \"crc\" DevicePath \"\""
Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.045222 4751 generic.go:334] "Generic (PLEG): container finished" podID="c7891c80-5b04-4a6e-8b3b-8b68efa8a6de" containerID="9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181" exitCode=0
Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.045295 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6xdfj" event={"ID":"c7891c80-5b04-4a6e-8b3b-8b68efa8a6de","Type":"ContainerDied","Data":"9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181"}
Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.045343 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6xdfj" event={"ID":"c7891c80-5b04-4a6e-8b3b-8b68efa8a6de","Type":"ContainerDied","Data":"5085fba647ce3ba6d64c0eb174bd6b9b77d5b038e1666bf227ff6b20406417e9"}
Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.045367 4751 scope.go:117] "RemoveContainer" containerID="9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181"
Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.045477 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6xdfj"
Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.046857 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lw6gm" event={"ID":"bd6eaa60-4995-4ace-8ab0-a880f09cbee0","Type":"ContainerStarted","Data":"04a01555b411ff9399ce63606967837bf71c09ec36c90c1b441a41babccc5f24"}
Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.046900 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lw6gm" event={"ID":"bd6eaa60-4995-4ace-8ab0-a880f09cbee0","Type":"ContainerStarted","Data":"173417603d83ece8479c25c17c4d08e017273465709efb2719a980a267da39f4"}
Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.069727 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-lw6gm" podStartSLOduration=1.812344234 podStartE2EDuration="2.069706368s" podCreationTimestamp="2026-01-30 21:33:22 +0000 UTC" firstStartedPulling="2026-01-30 21:33:23.428240332 +0000 UTC m=+1142.174062981" lastFinishedPulling="2026-01-30 21:33:23.685602466 +0000 UTC m=+1142.431425115" observedRunningTime="2026-01-30 21:33:24.061571 +0000 UTC m=+1142.807393649" watchObservedRunningTime="2026-01-30 21:33:24.069706368 +0000 UTC m=+1142.815529017"
Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.074782 4751 scope.go:117] "RemoveContainer" containerID="9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181"
Jan 30 21:33:24 crc kubenswrapper[4751]: E0130 21:33:24.075224 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181\": container with ID starting with 9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181 not found: ID does not exist" containerID="9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181"
Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.075267 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181"} err="failed to get container status \"9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181\": rpc error: code = NotFound desc = could not find container \"9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181\": container with ID starting with 9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181 not found: ID does not exist"
Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.082114 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6xdfj"]
Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.088959 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-6xdfj"]
Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.185767 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-9zjh6"
Jan 30 21:33:25 crc kubenswrapper[4751]: I0130 21:33:25.994188 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7891c80-5b04-4a6e-8b3b-8b68efa8a6de" path="/var/lib/kubelet/pods/c7891c80-5b04-4a6e-8b3b-8b68efa8a6de/volumes"
Jan 30 21:33:32 crc kubenswrapper[4751]: I0130 21:33:32.945740 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-lw6gm"
Jan 30 21:33:32 crc kubenswrapper[4751]: I0130 21:33:32.946347 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-lw6gm"
Jan 30 21:33:32 crc kubenswrapper[4751]: I0130 21:33:32.994873 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-lw6gm"
Jan 30 21:33:33 crc kubenswrapper[4751]: I0130 21:33:33.189463 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-lw6gm"
Jan 30 21:33:34 crc kubenswrapper[4751]: I0130 21:33:34.854825 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m"]
Jan 30 21:33:34 crc kubenswrapper[4751]: E0130 21:33:34.855182 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7891c80-5b04-4a6e-8b3b-8b68efa8a6de" containerName="registry-server"
Jan 30 21:33:34 crc kubenswrapper[4751]: I0130 21:33:34.855196 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7891c80-5b04-4a6e-8b3b-8b68efa8a6de" containerName="registry-server"
Jan 30 21:33:34 crc kubenswrapper[4751]: I0130 21:33:34.855398 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7891c80-5b04-4a6e-8b3b-8b68efa8a6de" containerName="registry-server"
Jan 30 21:33:34 crc kubenswrapper[4751]: I0130 21:33:34.856749 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m"
Jan 30 21:33:34 crc kubenswrapper[4751]: I0130 21:33:34.859750 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hcnbk"
Jan 30 21:33:34 crc kubenswrapper[4751]: I0130 21:33:34.870508 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m"]
Jan 30 21:33:34 crc kubenswrapper[4751]: I0130 21:33:34.941463 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-bundle\") pod \"c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m"
Jan 30 21:33:34 crc kubenswrapper[4751]: I0130 21:33:34.941578 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb9cl\" (UniqueName: \"kubernetes.io/projected/8fed4afd-9214-4ec9-816d-2ba6213f2f89-kube-api-access-vb9cl\") pod \"c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m"
Jan 30 21:33:34 crc kubenswrapper[4751]: I0130 21:33:34.941674 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-util\") pod \"c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m"
Jan 30 21:33:35 crc kubenswrapper[4751]: I0130 21:33:35.042700 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb9cl\" (UniqueName: \"kubernetes.io/projected/8fed4afd-9214-4ec9-816d-2ba6213f2f89-kube-api-access-vb9cl\") pod \"c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m"
Jan 30 21:33:35 crc kubenswrapper[4751]: I0130 21:33:35.042793 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-util\") pod \"c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m"
Jan 30 21:33:35 crc kubenswrapper[4751]: I0130 21:33:35.042884 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-bundle\") pod \"c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m"
Jan 30 21:33:35 crc kubenswrapper[4751]: I0130 21:33:35.043533 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-bundle\") pod \"c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m"
Jan 30 21:33:35 crc kubenswrapper[4751]: I0130 21:33:35.044809 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-util\") pod \"c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m"
Jan 30 21:33:35 crc kubenswrapper[4751]: I0130 21:33:35.065302 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb9cl\" (UniqueName: \"kubernetes.io/projected/8fed4afd-9214-4ec9-816d-2ba6213f2f89-kube-api-access-vb9cl\") pod \"c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m"
Jan 30 21:33:35 crc kubenswrapper[4751]: I0130 21:33:35.179762 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m"
Jan 30 21:33:35 crc kubenswrapper[4751]: I0130 21:33:35.662196 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m"]
Jan 30 21:33:36 crc kubenswrapper[4751]: I0130 21:33:36.173190 4751 generic.go:334] "Generic (PLEG): container finished" podID="8fed4afd-9214-4ec9-816d-2ba6213f2f89" containerID="1c1c6d4eeb91da764a539ab8a25b7d05749856e70d897106bbe6532c1dabd417" exitCode=0
Jan 30 21:33:36 crc kubenswrapper[4751]: I0130 21:33:36.173253 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" event={"ID":"8fed4afd-9214-4ec9-816d-2ba6213f2f89","Type":"ContainerDied","Data":"1c1c6d4eeb91da764a539ab8a25b7d05749856e70d897106bbe6532c1dabd417"}
Jan 30 21:33:36 crc kubenswrapper[4751]: I0130 21:33:36.173485 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" event={"ID":"8fed4afd-9214-4ec9-816d-2ba6213f2f89","Type":"ContainerStarted","Data":"7880f66f06961b7d6ee3bd6e6f25aa3acac10467a2d0d4d0f52f9bad3784aa79"}
Jan 30 21:33:37 crc kubenswrapper[4751]: I0130 21:33:37.187422 4751 generic.go:334] "Generic (PLEG): container finished" podID="8fed4afd-9214-4ec9-816d-2ba6213f2f89" containerID="19014789ea6964b73200fa154d8d665762c73630693484d03da67952be842860" exitCode=0
Jan 30 21:33:37 crc kubenswrapper[4751]: I0130 21:33:37.187545 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" event={"ID":"8fed4afd-9214-4ec9-816d-2ba6213f2f89","Type":"ContainerDied","Data":"19014789ea6964b73200fa154d8d665762c73630693484d03da67952be842860"}
Jan 30 21:33:38 crc kubenswrapper[4751]: I0130 21:33:38.204782 4751 generic.go:334] "Generic (PLEG): container finished" podID="8fed4afd-9214-4ec9-816d-2ba6213f2f89" containerID="be1d4829ebfafdab5f1e2d575ec83c7e9a122f738f0913a2c9ff50b4666798fc" exitCode=0
Jan 30 21:33:38 crc kubenswrapper[4751]: I0130 21:33:38.205187 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" event={"ID":"8fed4afd-9214-4ec9-816d-2ba6213f2f89","Type":"ContainerDied","Data":"be1d4829ebfafdab5f1e2d575ec83c7e9a122f738f0913a2c9ff50b4666798fc"}
Jan 30 21:33:39 crc kubenswrapper[4751]: I0130 21:33:39.653247 4751 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" Jan 30 21:33:39 crc kubenswrapper[4751]: I0130 21:33:39.731796 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb9cl\" (UniqueName: \"kubernetes.io/projected/8fed4afd-9214-4ec9-816d-2ba6213f2f89-kube-api-access-vb9cl\") pod \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " Jan 30 21:33:39 crc kubenswrapper[4751]: I0130 21:33:39.731955 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-bundle\") pod \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " Jan 30 21:33:39 crc kubenswrapper[4751]: I0130 21:33:39.732041 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-util\") pod \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " Jan 30 21:33:39 crc kubenswrapper[4751]: I0130 21:33:39.732679 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-bundle" (OuterVolumeSpecName: "bundle") pod "8fed4afd-9214-4ec9-816d-2ba6213f2f89" (UID: "8fed4afd-9214-4ec9-816d-2ba6213f2f89"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:33:39 crc kubenswrapper[4751]: I0130 21:33:39.739575 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fed4afd-9214-4ec9-816d-2ba6213f2f89-kube-api-access-vb9cl" (OuterVolumeSpecName: "kube-api-access-vb9cl") pod "8fed4afd-9214-4ec9-816d-2ba6213f2f89" (UID: "8fed4afd-9214-4ec9-816d-2ba6213f2f89"). InnerVolumeSpecName "kube-api-access-vb9cl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:33:39 crc kubenswrapper[4751]: I0130 21:33:39.748132 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-util" (OuterVolumeSpecName: "util") pod "8fed4afd-9214-4ec9-816d-2ba6213f2f89" (UID: "8fed4afd-9214-4ec9-816d-2ba6213f2f89"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:33:39 crc kubenswrapper[4751]: I0130 21:33:39.834561 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb9cl\" (UniqueName: \"kubernetes.io/projected/8fed4afd-9214-4ec9-816d-2ba6213f2f89-kube-api-access-vb9cl\") on node \"crc\" DevicePath \"\"" Jan 30 21:33:39 crc kubenswrapper[4751]: I0130 21:33:39.834592 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:33:39 crc kubenswrapper[4751]: I0130 21:33:39.834601 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:33:40 crc kubenswrapper[4751]: I0130 21:33:40.228047 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" event={"ID":"8fed4afd-9214-4ec9-816d-2ba6213f2f89","Type":"ContainerDied","Data":"7880f66f06961b7d6ee3bd6e6f25aa3acac10467a2d0d4d0f52f9bad3784aa79"} Jan 30 21:33:40 crc kubenswrapper[4751]: I0130 21:33:40.228108 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7880f66f06961b7d6ee3bd6e6f25aa3acac10467a2d0d4d0f52f9bad3784aa79" Jan 30 21:33:40 crc kubenswrapper[4751]: I0130 21:33:40.228122 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.778686 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh"] Jan 30 21:33:46 crc kubenswrapper[4751]: E0130 21:33:46.779633 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fed4afd-9214-4ec9-816d-2ba6213f2f89" containerName="pull" Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.779650 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fed4afd-9214-4ec9-816d-2ba6213f2f89" containerName="pull" Jan 30 21:33:46 crc kubenswrapper[4751]: E0130 21:33:46.779694 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fed4afd-9214-4ec9-816d-2ba6213f2f89" containerName="extract" Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.779702 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fed4afd-9214-4ec9-816d-2ba6213f2f89" containerName="extract" Jan 30 21:33:46 crc kubenswrapper[4751]: E0130 21:33:46.779724 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fed4afd-9214-4ec9-816d-2ba6213f2f89" containerName="util" Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.779734 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fed4afd-9214-4ec9-816d-2ba6213f2f89" containerName="util" Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.779901 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fed4afd-9214-4ec9-816d-2ba6213f2f89" containerName="extract" Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.780508 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh" Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.782933 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-z5j4l" Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.807857 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh"] Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.827838 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxmp4\" (UniqueName: \"kubernetes.io/projected/4b543295-a1a6-40ad-8b74-0ee6fdeb66c3-kube-api-access-kxmp4\") pod \"openstack-operator-controller-init-55fdcd6c79-9hzxh\" (UID: \"4b543295-a1a6-40ad-8b74-0ee6fdeb66c3\") " pod="openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh" Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.929286 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxmp4\" (UniqueName: \"kubernetes.io/projected/4b543295-a1a6-40ad-8b74-0ee6fdeb66c3-kube-api-access-kxmp4\") pod \"openstack-operator-controller-init-55fdcd6c79-9hzxh\" (UID: \"4b543295-a1a6-40ad-8b74-0ee6fdeb66c3\") " pod="openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh" Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.952142 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxmp4\" (UniqueName: \"kubernetes.io/projected/4b543295-a1a6-40ad-8b74-0ee6fdeb66c3-kube-api-access-kxmp4\") pod \"openstack-operator-controller-init-55fdcd6c79-9hzxh\" (UID: \"4b543295-a1a6-40ad-8b74-0ee6fdeb66c3\") " pod="openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh" Jan 30 21:33:47 crc kubenswrapper[4751]: I0130 21:33:47.100117 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh" Jan 30 21:33:47 crc kubenswrapper[4751]: I0130 21:33:47.532849 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh"] Jan 30 21:33:48 crc kubenswrapper[4751]: I0130 21:33:48.300051 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh" event={"ID":"4b543295-a1a6-40ad-8b74-0ee6fdeb66c3","Type":"ContainerStarted","Data":"f3bc77c8b555eea9398a5ffbbdf29466862a4aa04d846d3bb25cdf36aeff59c8"} Jan 30 21:33:52 crc kubenswrapper[4751]: I0130 21:33:52.353083 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh" event={"ID":"4b543295-a1a6-40ad-8b74-0ee6fdeb66c3","Type":"ContainerStarted","Data":"6a0c2990e2faf280012ac6c31ac61a3e5de4c2543e26bfd80b31434ec494eb62"} Jan 30 21:33:52 crc kubenswrapper[4751]: I0130 21:33:52.353907 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh" Jan 30 21:33:52 crc kubenswrapper[4751]: I0130 21:33:52.389895 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh" podStartSLOduration=1.781139526 podStartE2EDuration="6.389868562s" podCreationTimestamp="2026-01-30 21:33:46 +0000 UTC" firstStartedPulling="2026-01-30 21:33:47.54014786 +0000 UTC m=+1166.285970509" lastFinishedPulling="2026-01-30 21:33:52.148876896 +0000 UTC m=+1170.894699545" observedRunningTime="2026-01-30 21:33:52.380429361 +0000 UTC m=+1171.126252050" watchObservedRunningTime="2026-01-30 21:33:52.389868562 +0000 UTC m=+1171.135691231" Jan 30 21:33:57 crc kubenswrapper[4751]: I0130 21:33:57.105198 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.603990 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.605465 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.612348 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-j569z" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.624114 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.625452 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.627308 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-6q6zs" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.635634 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.640967 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.641897 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.650797 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.660118 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-lvzzp" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.665429 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.666443 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.668127 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-tlnws" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.684171 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.695470 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.700492 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.701530 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.703987 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clkgj\" (UniqueName: \"kubernetes.io/projected/236db419-e197-4a85-ab49-58cf38babea6-kube-api-access-clkgj\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-7mpjw\" (UID: \"236db419-e197-4a85-ab49-58cf38babea6\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.704058 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqgfs\" (UniqueName: \"kubernetes.io/projected/f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9-kube-api-access-wqgfs\") pod \"designate-operator-controller-manager-6d9697b7f4-ph5lf\" (UID: \"f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.704082 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rccf7\" (UniqueName: \"kubernetes.io/projected/9003ffe6-59a3-4c7c-96d0-d129a9339247-kube-api-access-rccf7\") pod \"cinder-operator-controller-manager-8d874c8fc-6fg4r\" (UID: \"9003ffe6-59a3-4c7c-96d0-d129a9339247\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.704434 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-c76qn" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.715740 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.781389 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.782497 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.788730 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-bq788" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.796001 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.797006 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.806861 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clkgj\" (UniqueName: \"kubernetes.io/projected/236db419-e197-4a85-ab49-58cf38babea6-kube-api-access-clkgj\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-7mpjw\" (UID: \"236db419-e197-4a85-ab49-58cf38babea6\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.806940 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nbx4\" (UniqueName: \"kubernetes.io/projected/3fae5204-d3a1-4e39-ac3d-d28c8a55c7db-kube-api-access-4nbx4\") pod \"heat-operator-controller-manager-69d6db494d-jxkmf\" (UID: \"3fae5204-d3a1-4e39-ac3d-d28c8a55c7db\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.806974 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqgfs\" (UniqueName: \"kubernetes.io/projected/f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9-kube-api-access-wqgfs\") pod \"designate-operator-controller-manager-6d9697b7f4-ph5lf\" (UID: \"f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.806996 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rccf7\" (UniqueName: \"kubernetes.io/projected/9003ffe6-59a3-4c7c-96d0-d129a9339247-kube-api-access-rccf7\") pod \"cinder-operator-controller-manager-8d874c8fc-6fg4r\" (UID: \"9003ffe6-59a3-4c7c-96d0-d129a9339247\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.807063 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74wpd\" (UniqueName: \"kubernetes.io/projected/0fd5051a-5be4-4336-af86-9674469b76a0-kube-api-access-74wpd\") pod \"glance-operator-controller-manager-8886f4c47-b65fl\" (UID: \"0fd5051a-5be4-4336-af86-9674469b76a0\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.811722 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-c58hn" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.828130 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-52vr2"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.836666 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rccf7\" (UniqueName: \"kubernetes.io/projected/9003ffe6-59a3-4c7c-96d0-d129a9339247-kube-api-access-rccf7\") pod \"cinder-operator-controller-manager-8d874c8fc-6fg4r\" (UID: \"9003ffe6-59a3-4c7c-96d0-d129a9339247\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.836867 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clkgj\" (UniqueName: \"kubernetes.io/projected/236db419-e197-4a85-ab49-58cf38babea6-kube-api-access-clkgj\") pod 
\"barbican-operator-controller-manager-7b6c4d8c5f-7mpjw\" (UID: \"236db419-e197-4a85-ab49-58cf38babea6\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.839042 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqgfs\" (UniqueName: \"kubernetes.io/projected/f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9-kube-api-access-wqgfs\") pod \"designate-operator-controller-manager-6d9697b7f4-ph5lf\" (UID: \"f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.850414 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.850476 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-52vr2"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.850486 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.850575 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.853058 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-p7wfp" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.853540 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.860715 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.862064 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.871975 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-4f62r" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.872104 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.873071 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.875735 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-8lzdh" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.894793 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.908643 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.909493 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2fmm\" (UniqueName: \"kubernetes.io/projected/9a88f139-89db-4b3a-8fea-bf951e59f564-kube-api-access-t2fmm\") pod \"ironic-operator-controller-manager-5f4b8bd54d-n2shb\" (UID: \"9a88f139-89db-4b3a-8fea-bf951e59f564\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.909535 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zc9p\" (UniqueName: \"kubernetes.io/projected/0b3a96d4-f5fc-47be-9c28-47239b2488c1-kube-api-access-4zc9p\") pod \"horizon-operator-controller-manager-5fb775575f-hsbbr\" (UID: \"0b3a96d4-f5fc-47be-9c28-47239b2488c1\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.909564 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nbx4\" (UniqueName: \"kubernetes.io/projected/3fae5204-d3a1-4e39-ac3d-d28c8a55c7db-kube-api-access-4nbx4\") pod \"heat-operator-controller-manager-69d6db494d-jxkmf\" (UID: \"3fae5204-d3a1-4e39-ac3d-d28c8a55c7db\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.909628 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74wpd\" (UniqueName: \"kubernetes.io/projected/0fd5051a-5be4-4336-af86-9674469b76a0-kube-api-access-74wpd\") pod \"glance-operator-controller-manager-8886f4c47-b65fl\" (UID: \"0fd5051a-5be4-4336-af86-9674469b76a0\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.920772 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.922601 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.927374 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.927956 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-gxtrq" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.937854 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nbx4\" (UniqueName: \"kubernetes.io/projected/3fae5204-d3a1-4e39-ac3d-d28c8a55c7db-kube-api-access-4nbx4\") pod \"heat-operator-controller-manager-69d6db494d-jxkmf\" (UID: \"3fae5204-d3a1-4e39-ac3d-d28c8a55c7db\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.942726 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74wpd\" (UniqueName: \"kubernetes.io/projected/0fd5051a-5be4-4336-af86-9674469b76a0-kube-api-access-74wpd\") pod \"glance-operator-controller-manager-8886f4c47-b65fl\" (UID: \"0fd5051a-5be4-4336-af86-9674469b76a0\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.947024 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.957997 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.959435 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.964868 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-gbq4n" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.965739 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.003667 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.010406 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.015562 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2fmm\" (UniqueName: \"kubernetes.io/projected/9a88f139-89db-4b3a-8fea-bf951e59f564-kube-api-access-t2fmm\") pod \"ironic-operator-controller-manager-5f4b8bd54d-n2shb\" (UID: \"9a88f139-89db-4b3a-8fea-bf951e59f564\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.015608 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-628b2\" (UniqueName: \"kubernetes.io/projected/1ad347ea-d2ce-4a1e-912a-8471445396f7-kube-api-access-628b2\") pod \"mariadb-operator-controller-manager-67bf948998-xk52h\" (UID: \"1ad347ea-d2ce-4a1e-912a-8471445396f7\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.015655 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zc9p\" (UniqueName: \"kubernetes.io/projected/0b3a96d4-f5fc-47be-9c28-47239b2488c1-kube-api-access-4zc9p\") pod \"horizon-operator-controller-manager-5fb775575f-hsbbr\" (UID: \"0b3a96d4-f5fc-47be-9c28-47239b2488c1\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.015684 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gn9l\" (UniqueName: \"kubernetes.io/projected/694b29bc-994c-4983-81c7-b32d47db553b-kube-api-access-9gn9l\") pod \"manila-operator-controller-manager-7dd968899f-7sk5v\" (UID: \"694b29bc-994c-4983-81c7-b32d47db553b\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.015712 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.015740 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcnrd\" (UniqueName: \"kubernetes.io/projected/b2777bff-2cca-4f41-8655-a737f13b4885-kube-api-access-gcnrd\") pod \"keystone-operator-controller-manager-84f48565d4-sw6zv\" (UID: \"b2777bff-2cca-4f41-8655-a737f13b4885\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.015825 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j55lm\" (UniqueName: \"kubernetes.io/projected/2d6f1acc-6416-44ae-9082-3ebe16dce448-kube-api-access-j55lm\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " 
pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.036652 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.040769 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zc9p\" (UniqueName: \"kubernetes.io/projected/0b3a96d4-f5fc-47be-9c28-47239b2488c1-kube-api-access-4zc9p\") pod \"horizon-operator-controller-manager-5fb775575f-hsbbr\" (UID: \"0b3a96d4-f5fc-47be-9c28-47239b2488c1\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.045271 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2fmm\" (UniqueName: \"kubernetes.io/projected/9a88f139-89db-4b3a-8fea-bf951e59f564-kube-api-access-t2fmm\") pod \"ironic-operator-controller-manager-5f4b8bd54d-n2shb\" (UID: \"9a88f139-89db-4b3a-8fea-bf951e59f564\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.068601 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.073918 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.075346 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.081265 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-6zms8" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.081367 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.107926 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.117249 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gn9l\" (UniqueName: \"kubernetes.io/projected/694b29bc-994c-4983-81c7-b32d47db553b-kube-api-access-9gn9l\") pod \"manila-operator-controller-manager-7dd968899f-7sk5v\" (UID: \"694b29bc-994c-4983-81c7-b32d47db553b\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.117311 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.117357 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcnrd\" (UniqueName: \"kubernetes.io/projected/b2777bff-2cca-4f41-8655-a737f13b4885-kube-api-access-gcnrd\") pod \"keystone-operator-controller-manager-84f48565d4-sw6zv\" (UID: \"b2777bff-2cca-4f41-8655-a737f13b4885\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.117426 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8fxp\" (UniqueName: \"kubernetes.io/projected/4a416a7c-3094-46ef-8370-9cad7446339b-kube-api-access-k8fxp\") pod \"neutron-operator-controller-manager-585dbc889-9vvgb\" (UID: \"4a416a7c-3094-46ef-8370-9cad7446339b\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.117540 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j55lm\" (UniqueName: \"kubernetes.io/projected/2d6f1acc-6416-44ae-9082-3ebe16dce448-kube-api-access-j55lm\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.117585 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-628b2\" (UniqueName: \"kubernetes.io/projected/1ad347ea-d2ce-4a1e-912a-8471445396f7-kube-api-access-628b2\") pod \"mariadb-operator-controller-manager-67bf948998-xk52h\" (UID: \"1ad347ea-d2ce-4a1e-912a-8471445396f7\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h" Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.118655 4751 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.118726 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert podName:2d6f1acc-6416-44ae-9082-3ebe16dce448 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:18.61870954 +0000 UTC m=+1197.364532189 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert") pod "infra-operator-controller-manager-79955696d6-52vr2" (UID: "2d6f1acc-6416-44ae-9082-3ebe16dce448") : secret "infra-operator-webhook-server-cert" not found Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.119193 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.122634 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.124374 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.130692 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-w7wtt" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.136266 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.139954 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.143405 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcnrd\" (UniqueName: \"kubernetes.io/projected/b2777bff-2cca-4f41-8655-a737f13b4885-kube-api-access-gcnrd\") pod \"keystone-operator-controller-manager-84f48565d4-sw6zv\" (UID: \"b2777bff-2cca-4f41-8655-a737f13b4885\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.143645 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-628b2\" (UniqueName: \"kubernetes.io/projected/1ad347ea-d2ce-4a1e-912a-8471445396f7-kube-api-access-628b2\") pod \"mariadb-operator-controller-manager-67bf948998-xk52h\" (UID: \"1ad347ea-d2ce-4a1e-912a-8471445396f7\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.143717 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gn9l\" (UniqueName: \"kubernetes.io/projected/694b29bc-994c-4983-81c7-b32d47db553b-kube-api-access-9gn9l\") pod \"manila-operator-controller-manager-7dd968899f-7sk5v\" (UID: \"694b29bc-994c-4983-81c7-b32d47db553b\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.147732 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j55lm\" (UniqueName: \"kubernetes.io/projected/2d6f1acc-6416-44ae-9082-3ebe16dce448-kube-api-access-j55lm\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.148427 4751 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.148657 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-bp5hg" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.157822 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.164650 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.164732 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.166182 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-tbmz5" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.171486 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.194803 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.207413 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.209830 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.213110 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-8jmcf" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.219205 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97x9t\" (UniqueName: \"kubernetes.io/projected/0026e471-8226-4038-8c52-f0add2877c8d-kube-api-access-97x9t\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.219266 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vxlg\" (UniqueName: \"kubernetes.io/projected/fcf49997-888f-4e58-99e7-f1f677dc7111-kube-api-access-6vxlg\") pod \"octavia-operator-controller-manager-6687f8d877-d6slz\" (UID: \"fcf49997-888f-4e58-99e7-f1f677dc7111\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.219304 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szp4h\" (UniqueName: \"kubernetes.io/projected/e596dcc9-7f31-4312-99e3-7d86d318ef9d-kube-api-access-szp4h\") pod \"nova-operator-controller-manager-55bff696bd-tbp7n\" (UID: \"e596dcc9-7f31-4312-99e3-7d86d318ef9d\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.219392 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8fxp\" (UniqueName: \"kubernetes.io/projected/4a416a7c-3094-46ef-8370-9cad7446339b-kube-api-access-k8fxp\") pod \"neutron-operator-controller-manager-585dbc889-9vvgb\" (UID: \"4a416a7c-3094-46ef-8370-9cad7446339b\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.219420 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.219445 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxgxr\" (UniqueName: \"kubernetes.io/projected/c711cf07-a695-447a-8d01-147b10e9059f-kube-api-access-vxgxr\") pod \"ovn-operator-controller-manager-788c46999f-c7tj6\" (UID: \"c711cf07-a695-447a-8d01-147b10e9059f\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.233269 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.238649 4751 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.239795 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.248748 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-jjwmz" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.258550 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8fxp\" (UniqueName: \"kubernetes.io/projected/4a416a7c-3094-46ef-8370-9cad7446339b-kube-api-access-k8fxp\") pod \"neutron-operator-controller-manager-585dbc889-9vvgb\" (UID: \"4a416a7c-3094-46ef-8370-9cad7446339b\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.263967 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.308436 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.309686 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.316677 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-njldh" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.322551 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt5hw\" (UniqueName: \"kubernetes.io/projected/0c86abfd-77a9-4388-8b7f-b61bb378f7cb-kube-api-access-kt5hw\") pod \"swift-operator-controller-manager-68fc8c869-r6smn\" (UID: \"0c86abfd-77a9-4388-8b7f-b61bb378f7cb\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.322612 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.322649 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxgxr\" (UniqueName: \"kubernetes.io/projected/c711cf07-a695-447a-8d01-147b10e9059f-kube-api-access-vxgxr\") pod \"ovn-operator-controller-manager-788c46999f-c7tj6\" (UID: \"c711cf07-a695-447a-8d01-147b10e9059f\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.322726 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6d94\" (UniqueName: \"kubernetes.io/projected/ace28553-76bc-4472-a671-788e1fb9a1ff-kube-api-access-c6d94\") pod \"placement-operator-controller-manager-5b964cf4cd-dx8wk\" (UID: \"ace28553-76bc-4472-a671-788e1fb9a1ff\") " 
pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.322780 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97x9t\" (UniqueName: \"kubernetes.io/projected/0026e471-8226-4038-8c52-f0add2877c8d-kube-api-access-97x9t\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.322815 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vxlg\" (UniqueName: \"kubernetes.io/projected/fcf49997-888f-4e58-99e7-f1f677dc7111-kube-api-access-6vxlg\") pod \"octavia-operator-controller-manager-6687f8d877-d6slz\" (UID: \"fcf49997-888f-4e58-99e7-f1f677dc7111\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz" Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.322834 4751 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.322858 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szp4h\" (UniqueName: \"kubernetes.io/projected/e596dcc9-7f31-4312-99e3-7d86d318ef9d-kube-api-access-szp4h\") pod \"nova-operator-controller-manager-55bff696bd-tbp7n\" (UID: \"e596dcc9-7f31-4312-99e3-7d86d318ef9d\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n" Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.322885 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert podName:0026e471-8226-4038-8c52-f0add2877c8d nodeName:}" failed. No retries permitted until 2026-01-30 21:34:18.822868093 +0000 UTC m=+1197.568690742 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" (UID: "0026e471-8226-4038-8c52-f0add2877c8d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.327977 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.328815 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.342946 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxgxr\" (UniqueName: \"kubernetes.io/projected/c711cf07-a695-447a-8d01-147b10e9059f-kube-api-access-vxgxr\") pod \"ovn-operator-controller-manager-788c46999f-c7tj6\" (UID: \"c711cf07-a695-447a-8d01-147b10e9059f\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.346774 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szp4h\" (UniqueName: \"kubernetes.io/projected/e596dcc9-7f31-4312-99e3-7d86d318ef9d-kube-api-access-szp4h\") pod \"nova-operator-controller-manager-55bff696bd-tbp7n\" (UID: \"e596dcc9-7f31-4312-99e3-7d86d318ef9d\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.347928 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97x9t\" (UniqueName: \"kubernetes.io/projected/0026e471-8226-4038-8c52-f0add2877c8d-kube-api-access-97x9t\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.354382 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.365064 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.375199 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.405947 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.407503 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.407737 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.415642 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-t5rnt" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.424096 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vxlg\" (UniqueName: \"kubernetes.io/projected/fcf49997-888f-4e58-99e7-f1f677dc7111-kube-api-access-6vxlg\") pod \"octavia-operator-controller-manager-6687f8d877-d6slz\" (UID: \"fcf49997-888f-4e58-99e7-f1f677dc7111\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.425841 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szjsd\" (UniqueName: \"kubernetes.io/projected/3b9cc057-30d7-4a03-8c76-a1ca7200dbae-kube-api-access-szjsd\") pod \"telemetry-operator-controller-manager-6749767b8f-62rqr\" (UID: \"3b9cc057-30d7-4a03-8c76-a1ca7200dbae\") " pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.425894 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt5hw\" (UniqueName: \"kubernetes.io/projected/0c86abfd-77a9-4388-8b7f-b61bb378f7cb-kube-api-access-kt5hw\") pod \"swift-operator-controller-manager-68fc8c869-r6smn\" (UID: \"0c86abfd-77a9-4388-8b7f-b61bb378f7cb\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.425986 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6d94\" (UniqueName: \"kubernetes.io/projected/ace28553-76bc-4472-a671-788e1fb9a1ff-kube-api-access-c6d94\") pod \"placement-operator-controller-manager-5b964cf4cd-dx8wk\" (UID: \"ace28553-76bc-4472-a671-788e1fb9a1ff\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.449084 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.450719 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.503500 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt5hw\" (UniqueName: \"kubernetes.io/projected/0c86abfd-77a9-4388-8b7f-b61bb378f7cb-kube-api-access-kt5hw\") pod \"swift-operator-controller-manager-68fc8c869-r6smn\" (UID: \"0c86abfd-77a9-4388-8b7f-b61bb378f7cb\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.512977 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6d94\" (UniqueName: \"kubernetes.io/projected/ace28553-76bc-4472-a671-788e1fb9a1ff-kube-api-access-c6d94\") pod \"placement-operator-controller-manager-5b964cf4cd-dx8wk\" (UID: \"ace28553-76bc-4472-a671-788e1fb9a1ff\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.528559 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szjsd\" (UniqueName: \"kubernetes.io/projected/3b9cc057-30d7-4a03-8c76-a1ca7200dbae-kube-api-access-szjsd\") pod \"telemetry-operator-controller-manager-6749767b8f-62rqr\" (UID: \"3b9cc057-30d7-4a03-8c76-a1ca7200dbae\") " pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.528668 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcdvn\" (UniqueName: \"kubernetes.io/projected/3d59cc79-1a37-434a-a04b-156739f469d7-kube-api-access-mcdvn\") pod \"test-operator-controller-manager-56f8bfcd9f-sc9gq\" (UID: \"3d59cc79-1a37-434a-a04b-156739f469d7\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.536479 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-gcvgx"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.537603 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.540623 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-f42hl" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.565108 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-gcvgx"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.578902 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szjsd\" (UniqueName: \"kubernetes.io/projected/3b9cc057-30d7-4a03-8c76-a1ca7200dbae-kube-api-access-szjsd\") pod \"telemetry-operator-controller-manager-6749767b8f-62rqr\" (UID: \"3b9cc057-30d7-4a03-8c76-a1ca7200dbae\") " pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.588591 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.608111 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.630415 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcdvn\" (UniqueName: \"kubernetes.io/projected/3d59cc79-1a37-434a-a04b-156739f469d7-kube-api-access-mcdvn\") pod \"test-operator-controller-manager-56f8bfcd9f-sc9gq\" (UID: \"3d59cc79-1a37-434a-a04b-156739f469d7\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.630459 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l88tl\" (UniqueName: \"kubernetes.io/projected/cbae5889-938b-4211-94a6-de960df2f95d-kube-api-access-l88tl\") pod \"watcher-operator-controller-manager-564965969-gcvgx\" (UID: \"cbae5889-938b-4211-94a6-de960df2f95d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.630603 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.631088 4751 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.631138 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert podName:2d6f1acc-6416-44ae-9082-3ebe16dce448 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:19.631122786 +0000 UTC m=+1198.376945435 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert") pod "infra-operator-controller-manager-79955696d6-52vr2" (UID: "2d6f1acc-6416-44ae-9082-3ebe16dce448") : secret "infra-operator-webhook-server-cert" not found Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.633018 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.636424 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.650868 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.663116 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.663765 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-bxxmp" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.664695 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.665469 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.705609 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcdvn\" (UniqueName: \"kubernetes.io/projected/3d59cc79-1a37-434a-a04b-156739f469d7-kube-api-access-mcdvn\") pod \"test-operator-controller-manager-56f8bfcd9f-sc9gq\" (UID: \"3d59cc79-1a37-434a-a04b-156739f469d7\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.713921 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.732214 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.732268 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.732361 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l88tl\" (UniqueName: \"kubernetes.io/projected/cbae5889-938b-4211-94a6-de960df2f95d-kube-api-access-l88tl\") pod \"watcher-operator-controller-manager-564965969-gcvgx\" (UID: \"cbae5889-938b-4211-94a6-de960df2f95d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.732493 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lldbr\" (UniqueName: \"kubernetes.io/projected/dac6f1f3-8549-488c-bb63-aa980f4a1282-kube-api-access-lldbr\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.756507 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.758741 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.762065 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.762669 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-pfv2x" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.762929 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l88tl\" (UniqueName: \"kubernetes.io/projected/cbae5889-938b-4211-94a6-de960df2f95d-kube-api-access-l88tl\") pod \"watcher-operator-controller-manager-564965969-gcvgx\" (UID: \"cbae5889-938b-4211-94a6-de960df2f95d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.843641 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lldbr\" (UniqueName: \"kubernetes.io/projected/dac6f1f3-8549-488c-bb63-aa980f4a1282-kube-api-access-lldbr\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.844057 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.844082 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.844103 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.844222 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p4vl\" (UniqueName: \"kubernetes.io/projected/a986231c-2119-4a13-801d-51119db5d365-kube-api-access-5p4vl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-v8vch\" (UID: \"a986231c-2119-4a13-801d-51119db5d365\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch" Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.844234 4751 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.844292 4751 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert podName:0026e471-8226-4038-8c52-f0add2877c8d nodeName:}" failed. No retries permitted until 2026-01-30 21:34:19.84427523 +0000 UTC m=+1198.590097879 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" (UID: "0026e471-8226-4038-8c52-f0add2877c8d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.844311 4751 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.844397 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:19.344373352 +0000 UTC m=+1198.090195991 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "metrics-server-cert" not found Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.844410 4751 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.844467 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:19.344451754 +0000 UTC m=+1198.090274403 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "webhook-server-cert" not found Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.869673 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lldbr\" (UniqueName: \"kubernetes.io/projected/dac6f1f3-8549-488c-bb63-aa980f4a1282-kube-api-access-lldbr\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.882912 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.911964 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.945665 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p4vl\" (UniqueName: \"kubernetes.io/projected/a986231c-2119-4a13-801d-51119db5d365-kube-api-access-5p4vl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-v8vch\" (UID: \"a986231c-2119-4a13-801d-51119db5d365\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.968036 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p4vl\" (UniqueName: \"kubernetes.io/projected/a986231c-2119-4a13-801d-51119db5d365-kube-api-access-5p4vl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-v8vch\" (UID: \"a986231c-2119-4a13-801d-51119db5d365\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.969307 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx" Jan 30 21:34:19 crc kubenswrapper[4751]: I0130 21:34:19.103256 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch" Jan 30 21:34:19 crc kubenswrapper[4751]: I0130 21:34:19.353237 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:19 crc kubenswrapper[4751]: E0130 21:34:19.353427 4751 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 21:34:19 crc kubenswrapper[4751]: I0130 21:34:19.353598 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:19 crc kubenswrapper[4751]: E0130 21:34:19.353677 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:20.353652045 +0000 UTC m=+1199.099474724 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "metrics-server-cert" not found Jan 30 21:34:19 crc kubenswrapper[4751]: E0130 21:34:19.353843 4751 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 21:34:19 crc kubenswrapper[4751]: E0130 21:34:19.353895 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:20.353879401 +0000 UTC m=+1199.099702050 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "webhook-server-cert" not found Jan 30 21:34:19 crc kubenswrapper[4751]: I0130 21:34:19.606965 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw" event={"ID":"236db419-e197-4a85-ab49-58cf38babea6","Type":"ContainerStarted","Data":"2ba4406bb67582ee9f558fbfbada8121f5b51f13670b02de497c442bd9ca8830"} Jan 30 21:34:19 crc kubenswrapper[4751]: I0130 21:34:19.658582 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:19 crc kubenswrapper[4751]: E0130 21:34:19.658746 4751 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 21:34:19 crc kubenswrapper[4751]: E0130 21:34:19.658817 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert podName:2d6f1acc-6416-44ae-9082-3ebe16dce448 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:21.658798946 +0000 UTC m=+1200.404621605 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert") pod "infra-operator-controller-manager-79955696d6-52vr2" (UID: "2d6f1acc-6416-44ae-9082-3ebe16dce448") : secret "infra-operator-webhook-server-cert" not found Jan 30 21:34:19 crc kubenswrapper[4751]: I0130 21:34:19.828384 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf"] Jan 30 21:34:19 crc kubenswrapper[4751]: I0130 21:34:19.841848 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl"] Jan 30 21:34:19 crc kubenswrapper[4751]: W0130 21:34:19.848668 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9003ffe6_59a3_4c7c_96d0_d129a9339247.slice/crio-9329585d872defd623d97d0739b0dd49e03a2c1f7d6f7bce192b2b4edc0d9bbc WatchSource:0}: Error finding container 9329585d872defd623d97d0739b0dd49e03a2c1f7d6f7bce192b2b4edc0d9bbc: Status 404 returned error can't find the container with id 9329585d872defd623d97d0739b0dd49e03a2c1f7d6f7bce192b2b4edc0d9bbc Jan 30 21:34:19 crc kubenswrapper[4751]: W0130 21:34:19.859911 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fd5051a_5be4_4336_af86_9674469b76a0.slice/crio-689fe8529e2dd72a517cf93755e7a0a7b52870527ec64ba468c8c4f7783b4573 WatchSource:0}: Error finding container 689fe8529e2dd72a517cf93755e7a0a7b52870527ec64ba468c8c4f7783b4573: Status 404 returned error can't find the container with id 689fe8529e2dd72a517cf93755e7a0a7b52870527ec64ba468c8c4f7783b4573 Jan 30 21:34:19 crc kubenswrapper[4751]: I0130 21:34:19.864505 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf"] Jan 30 21:34:19 crc kubenswrapper[4751]: I0130 21:34:19.877843 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r"] Jan 30 21:34:19 crc kubenswrapper[4751]: I0130 21:34:19.878635 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:19 crc kubenswrapper[4751]: E0130 21:34:19.878799 4751 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:34:19 crc kubenswrapper[4751]: E0130 21:34:19.878862 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert podName:0026e471-8226-4038-8c52-f0add2877c8d nodeName:}" failed. No retries permitted until 2026-01-30 21:34:21.878843794 +0000 UTC m=+1200.624666443 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" (UID: "0026e471-8226-4038-8c52-f0add2877c8d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.111463 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv"] Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.120594 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n"] Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.144560 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr"] Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.152657 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v"] Jan 30 21:34:20 crc kubenswrapper[4751]: W0130 21:34:20.163793 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ad347ea_d2ce_4a1e_912a_8471445396f7.slice/crio-8c40aef27db5390a879bcf0125d770c1b581301fd2cfaa6fa313216470157654 WatchSource:0}: Error finding container 8c40aef27db5390a879bcf0125d770c1b581301fd2cfaa6fa313216470157654: Status 404 returned error can't find the container with id 8c40aef27db5390a879bcf0125d770c1b581301fd2cfaa6fa313216470157654 Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.175520 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h"] Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.181377 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb"] Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.388271 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.388315 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.388467 4751 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.388512 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:22.388498847 +0000 UTC m=+1201.134321496 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "webhook-server-cert" not found Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.388796 4751 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.388818 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:22.388811575 +0000 UTC m=+1201.134634224 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "metrics-server-cert" not found Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.621530 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" event={"ID":"0b3a96d4-f5fc-47be-9c28-47239b2488c1","Type":"ContainerStarted","Data":"91b23c6544d6bed27d5b81117dea91eb3f50aa21e109430b6526723075cb41a3"} Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.622630 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" event={"ID":"f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9","Type":"ContainerStarted","Data":"5caed7165150755b34445e56c40ef0252960893bc2b1e3802b4236f5e7a84ac9"} Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.623925 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" event={"ID":"9003ffe6-59a3-4c7c-96d0-d129a9339247","Type":"ContainerStarted","Data":"9329585d872defd623d97d0739b0dd49e03a2c1f7d6f7bce192b2b4edc0d9bbc"} Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.624819 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb" event={"ID":"9a88f139-89db-4b3a-8fea-bf951e59f564","Type":"ContainerStarted","Data":"7e327cfa635d28359b8acf63438541f92264d3c506a6bf8242ee4c751c8376ed"} Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.625877 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" event={"ID":"694b29bc-994c-4983-81c7-b32d47db553b","Type":"ContainerStarted","Data":"5dfb8991a4f5c5de4364e1cd9bccebd6806d738f0d0cc85cecc23600ec73f524"} Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.627352 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" event={"ID":"3fae5204-d3a1-4e39-ac3d-d28c8a55c7db","Type":"ContainerStarted","Data":"c8a1058cffeb24b2b928fa9fd7519b5d8864f41396346c06970ae212d20c48cc"} Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.628673 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n" 
event={"ID":"e596dcc9-7f31-4312-99e3-7d86d318ef9d","Type":"ContainerStarted","Data":"9a1ec0ead61cc5cd826574c047b6963a5dee629a4955684d0b231df84b4ca606"} Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.629723 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h" event={"ID":"1ad347ea-d2ce-4a1e-912a-8471445396f7","Type":"ContainerStarted","Data":"8c40aef27db5390a879bcf0125d770c1b581301fd2cfaa6fa313216470157654"} Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.630849 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl" event={"ID":"0fd5051a-5be4-4336-af86-9674469b76a0","Type":"ContainerStarted","Data":"689fe8529e2dd72a517cf93755e7a0a7b52870527ec64ba468c8c4f7783b4573"} Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.631914 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" event={"ID":"b2777bff-2cca-4f41-8655-a737f13b4885","Type":"ContainerStarted","Data":"a240e046249b286069b7324a2b9ef995899c6ea84f6ff40bca43281f3add3062"} Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.732976 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz"] Jan 30 21:34:20 crc kubenswrapper[4751]: W0130 21:34:20.758934 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcf49997_888f_4e58_99e7_f1f677dc7111.slice/crio-9b82bd9a8cb32cce1497f99de7e84e36df6e793134aabedb5e1080d55731e87c WatchSource:0}: Error finding container 9b82bd9a8cb32cce1497f99de7e84e36df6e793134aabedb5e1080d55731e87c: Status 404 returned error can't find the container with id 9b82bd9a8cb32cce1497f99de7e84e36df6e793134aabedb5e1080d55731e87c Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.760526 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn"] Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.777797 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq"] Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.790781 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-gcvgx"] Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.796968 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk"] Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.808920 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch"] Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.819620 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr"] Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.826277 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb"] Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.833843 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6"] Jan 30 21:34:20 crc kubenswrapper[4751]: 
W0130 21:34:20.842385 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda986231c_2119_4a13_801d_51119db5d365.slice/crio-193e672f303b83e1d934fcbe9df43481356ac6434595e5751867078a1e21c908 WatchSource:0}: Error finding container 193e672f303b83e1d934fcbe9df43481356ac6434595e5751867078a1e21c908: Status 404 returned error can't find the container with id 193e672f303b83e1d934fcbe9df43481356ac6434595e5751867078a1e21c908 Jan 30 21:34:20 crc kubenswrapper[4751]: W0130 21:34:20.843608 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b9cc057_30d7_4a03_8c76_a1ca7200dbae.slice/crio-989f1c0433c76638fa747641936514fc38831afa8e383f773ba81a8ba115bd12 WatchSource:0}: Error finding container 989f1c0433c76638fa747641936514fc38831afa8e383f773ba81a8ba115bd12: Status 404 returned error can't find the container with id 989f1c0433c76638fa747641936514fc38831afa8e383f773ba81a8ba115bd12 Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.853351 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5p4vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-v8vch_openstack-operators(a986231c-2119-4a13-801d-51119db5d365): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 21:34:20 crc kubenswrapper[4751]: W0130 21:34:20.853433 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbae5889_938b_4211_94a6_de960df2f95d.slice/crio-058882ebb92652797bf4059a5aaae63daa2c9f92ef617ccdca1030ce3c961fe1 WatchSource:0}: Error finding 
container 058882ebb92652797bf4059a5aaae63daa2c9f92ef617ccdca1030ce3c961fe1: Status 404 returned error can't find the container with id 058882ebb92652797bf4059a5aaae63daa2c9f92ef617ccdca1030ce3c961fe1 Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.854471 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch" podUID="a986231c-2119-4a13-801d-51119db5d365" Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.855892 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l88tl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-gcvgx_openstack-operators(cbae5889-938b-4211-94a6-de960df2f95d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.857387 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx" podUID="cbae5889-938b-4211-94a6-de960df2f95d" Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.863249 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c6d94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-dx8wk_openstack-operators(ace28553-76bc-4472-a671-788e1fb9a1ff): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.864383 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk" podUID="ace28553-76bc-4472-a671-788e1fb9a1ff" Jan 30 21:34:20 crc kubenswrapper[4751]: W0130 21:34:20.876302 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a416a7c_3094_46ef_8370_9cad7446339b.slice/crio-0094b7b3dfbc9734b6276513985162aa04d0ae49e6122a38568badfe07d38017 WatchSource:0}: Error finding container 0094b7b3dfbc9734b6276513985162aa04d0ae49e6122a38568badfe07d38017: Status 404 returned error can't find the container with id 0094b7b3dfbc9734b6276513985162aa04d0ae49e6122a38568badfe07d38017 Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.887297 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k8fxp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-9vvgb_openstack-operators(4a416a7c-3094-46ef-8370-9cad7446339b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.888930 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" podUID="4a416a7c-3094-46ef-8370-9cad7446339b" Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.658362 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq" event={"ID":"3d59cc79-1a37-434a-a04b-156739f469d7","Type":"ContainerStarted","Data":"4aea079f16bcf7802778f38af2a9756734fe7496a2f9ccf4969bdd2adab93087"} Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.660409 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6" event={"ID":"c711cf07-a695-447a-8d01-147b10e9059f","Type":"ContainerStarted","Data":"89940a894ad22cc1d199b4183fb29cc1f4c3fd68af2eeca7f427cd9a5c8691c3"} Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.661841 4751 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn" event={"ID":"0c86abfd-77a9-4388-8b7f-b61bb378f7cb","Type":"ContainerStarted","Data":"ddfb2961f9e21ec6546a36edfdfd8a87e77634181bf1a823108169a8d3b1a2bc"} Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.664644 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch" event={"ID":"a986231c-2119-4a13-801d-51119db5d365","Type":"ContainerStarted","Data":"193e672f303b83e1d934fcbe9df43481356ac6434595e5751867078a1e21c908"} Jan 30 21:34:21 crc kubenswrapper[4751]: E0130 21:34:21.665972 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch" podUID="a986231c-2119-4a13-801d-51119db5d365" Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.666763 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx" event={"ID":"cbae5889-938b-4211-94a6-de960df2f95d","Type":"ContainerStarted","Data":"058882ebb92652797bf4059a5aaae63daa2c9f92ef617ccdca1030ce3c961fe1"} Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.670778 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz" event={"ID":"fcf49997-888f-4e58-99e7-f1f677dc7111","Type":"ContainerStarted","Data":"9b82bd9a8cb32cce1497f99de7e84e36df6e793134aabedb5e1080d55731e87c"} Jan 30 21:34:21 crc kubenswrapper[4751]: E0130 21:34:21.671854 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx" podUID="cbae5889-938b-4211-94a6-de960df2f95d" Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.671958 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" event={"ID":"4a416a7c-3094-46ef-8370-9cad7446339b","Type":"ContainerStarted","Data":"0094b7b3dfbc9734b6276513985162aa04d0ae49e6122a38568badfe07d38017"} Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.673395 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr" event={"ID":"3b9cc057-30d7-4a03-8c76-a1ca7200dbae","Type":"ContainerStarted","Data":"989f1c0433c76638fa747641936514fc38831afa8e383f773ba81a8ba115bd12"} Jan 30 21:34:21 crc kubenswrapper[4751]: E0130 21:34:21.674496 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" podUID="4a416a7c-3094-46ef-8370-9cad7446339b" Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.680111 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk" event={"ID":"ace28553-76bc-4472-a671-788e1fb9a1ff","Type":"ContainerStarted","Data":"d8186f89b686033afc03bed94516f283c26dfaa0d4533f55008bedabb5350c3a"} Jan 30 21:34:21 crc kubenswrapper[4751]: E0130 21:34:21.682627 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk" podUID="ace28553-76bc-4472-a671-788e1fb9a1ff" Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.726073 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:21 crc kubenswrapper[4751]: E0130 21:34:21.727702 4751 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 21:34:21 crc kubenswrapper[4751]: E0130 21:34:21.727784 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert podName:2d6f1acc-6416-44ae-9082-3ebe16dce448 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:25.727765888 +0000 UTC m=+1204.473588537 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert") pod "infra-operator-controller-manager-79955696d6-52vr2" (UID: "2d6f1acc-6416-44ae-9082-3ebe16dce448") : secret "infra-operator-webhook-server-cert" not found Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.937267 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:21 crc kubenswrapper[4751]: E0130 21:34:21.937715 4751 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:34:21 crc kubenswrapper[4751]: E0130 21:34:21.937948 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert podName:0026e471-8226-4038-8c52-f0add2877c8d nodeName:}" failed. No retries permitted until 2026-01-30 21:34:25.937931192 +0000 UTC m=+1204.683753841 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" (UID: "0026e471-8226-4038-8c52-f0add2877c8d") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 21:34:22 crc kubenswrapper[4751]: I0130 21:34:22.461610 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:22 crc kubenswrapper[4751]: I0130 21:34:22.461925 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:22 crc kubenswrapper[4751]: E0130 21:34:22.461778 4751 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 30 21:34:22 crc kubenswrapper[4751]: E0130 21:34:22.462039 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:26.46202191 +0000 UTC m=+1205.207844559 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "metrics-server-cert" not found
Jan 30 21:34:22 crc kubenswrapper[4751]: E0130 21:34:22.462119 4751 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
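All of these MountVolume.SetUp failures share one shape: a secret volume references a Secret (webhook-server-cert, metrics-server-cert, and the per-operator *-webhook-server-cert names) that does not exist yet, and the kubelet simply retries until it appears. Such serving-cert secrets are normally created asynchronously (for example by cert-manager or the operator bundle) shortly after install, so these errors are usually self-healing. Purely as an illustration of what would satisfy the mount, here is a hypothetical client-go sketch that creates one of the missing TLS secrets by hand, assuming a pre-generated tls.crt/tls.key pair on disk; in this cluster you would normally wait for the real issuer instead:

    package main

    import (
        "context"
        "os"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical stopgap: the secret is expected to be created for us;
        // making it by hand only unblocks the pending volume mount.
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        crt, err := os.ReadFile("tls.crt") // assumed pre-generated serving cert
        if err != nil {
            panic(err)
        }
        key, err := os.ReadFile("tls.key")
        if err != nil {
            panic(err)
        }

        secret := &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "infra-operator-webhook-server-cert", // name from the mount error
                Namespace: "openstack-operators",
            },
            Type: corev1.SecretTypeTLS,
            Data: map[string][]byte{"tls.crt": crt, "tls.key": key},
        }
        if _, err := client.CoreV1().Secrets(secret.Namespace).Create(
            context.Background(), secret, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }

Once the Secret exists, the next retry mounts it, exactly as the MountVolume.SetUp succeeded record near the end of this excerpt shows for the infra-operator cert.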
Jan 30 21:34:22 crc kubenswrapper[4751]: E0130 21:34:22.462172 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:26.462157164 +0000 UTC m=+1205.207979813 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "webhook-server-cert" not found
Jan 30 21:34:22 crc kubenswrapper[4751]: E0130 21:34:22.694792 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk" podUID="ace28553-76bc-4472-a671-788e1fb9a1ff"
Jan 30 21:34:22 crc kubenswrapper[4751]: E0130 21:34:22.695085 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx" podUID="cbae5889-938b-4211-94a6-de960df2f95d"
Jan 30 21:34:22 crc kubenswrapper[4751]: E0130 21:34:22.695121 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" podUID="4a416a7c-3094-46ef-8370-9cad7446339b"
Jan 30 21:34:22 crc kubenswrapper[4751]: E0130 21:34:22.695168 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch" podUID="a986231c-2119-4a13-801d-51119db5d365"
Jan 30 21:34:22 crc kubenswrapper[4751]: I0130 21:34:22.799198 4751 scope.go:117] "RemoveContainer" containerID="a1a33ec969e1c6d383f8048ab15fcd257712831d15c58fa7001702dde20fdac5"
Jan 30 21:34:23 crc kubenswrapper[4751]: I0130 21:34:23.045335 4751 scope.go:117] "RemoveContainer" containerID="cde0bba9b5bd705e79427c82f01801f8fb8f078a030c2d0c0c73c34abe57027a"
Jan 30 21:34:25 crc kubenswrapper[4751]: I0130 21:34:25.827080 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2"
Jan 30 21:34:25 crc kubenswrapper[4751]: E0130 21:34:25.827269 4751 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 30 21:34:25 crc kubenswrapper[4751]: E0130 21:34:25.827597 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert podName:2d6f1acc-6416-44ae-9082-3ebe16dce448 nodeName:}" failed. 
No retries permitted until 2026-01-30 21:34:33.827575884 +0000 UTC m=+1212.573398533 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert") pod "infra-operator-controller-manager-79955696d6-52vr2" (UID: "2d6f1acc-6416-44ae-9082-3ebe16dce448") : secret "infra-operator-webhook-server-cert" not found Jan 30 21:34:26 crc kubenswrapper[4751]: I0130 21:34:26.030572 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:26 crc kubenswrapper[4751]: E0130 21:34:26.030845 4751 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:34:26 crc kubenswrapper[4751]: E0130 21:34:26.030903 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert podName:0026e471-8226-4038-8c52-f0add2877c8d nodeName:}" failed. No retries permitted until 2026-01-30 21:34:34.030887005 +0000 UTC m=+1212.776709654 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" (UID: "0026e471-8226-4038-8c52-f0add2877c8d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:34:26 crc kubenswrapper[4751]: I0130 21:34:26.538434 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:26 crc kubenswrapper[4751]: I0130 21:34:26.538484 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:26 crc kubenswrapper[4751]: E0130 21:34:26.538620 4751 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 21:34:26 crc kubenswrapper[4751]: E0130 21:34:26.538635 4751 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 21:34:26 crc kubenswrapper[4751]: E0130 21:34:26.538697 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:34.538658717 +0000 UTC m=+1213.284481366 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "webhook-server-cert" not found Jan 30 21:34:26 crc kubenswrapper[4751]: E0130 21:34:26.538714 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:34.538707188 +0000 UTC m=+1213.284529837 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "metrics-server-cert" not found Jan 30 21:34:33 crc kubenswrapper[4751]: E0130 21:34:33.094680 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382" Jan 30 21:34:33 crc kubenswrapper[4751]: E0130 21:34:33.095521 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqgfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d9697b7f4-ph5lf_openstack-operators(f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:34:33 crc kubenswrapper[4751]: E0130 21:34:33.096783 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" podUID="f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9" Jan 30 21:34:33 crc kubenswrapper[4751]: E0130 21:34:33.779655 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" podUID="f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9" Jan 30 21:34:33 crc kubenswrapper[4751]: I0130 21:34:33.885102 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:33 crc kubenswrapper[4751]: E0130 21:34:33.885366 4751 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 21:34:33 crc kubenswrapper[4751]: E0130 21:34:33.885476 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert podName:2d6f1acc-6416-44ae-9082-3ebe16dce448 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:49.885448409 +0000 UTC m=+1228.631271158 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert") pod "infra-operator-controller-manager-79955696d6-52vr2" (UID: "2d6f1acc-6416-44ae-9082-3ebe16dce448") : secret "infra-operator-webhook-server-cert" not found
Jan 30 21:34:34 crc kubenswrapper[4751]: I0130 21:34:34.088937 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk"
Jan 30 21:34:34 crc kubenswrapper[4751]: E0130 21:34:34.090343 4751 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 21:34:34 crc kubenswrapper[4751]: E0130 21:34:34.090549 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert podName:0026e471-8226-4038-8c52-f0add2877c8d nodeName:}" failed. No retries permitted until 2026-01-30 21:34:50.090536028 +0000 UTC m=+1228.836358677 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" (UID: "0026e471-8226-4038-8c52-f0add2877c8d") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 21:34:34 crc kubenswrapper[4751]: I0130 21:34:34.598042 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:34 crc kubenswrapper[4751]: I0130 21:34:34.598088 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:34 crc kubenswrapper[4751]: E0130 21:34:34.598308 4751 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 30 21:34:34 crc kubenswrapper[4751]: E0130 21:34:34.598416 4751 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
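Note the durationBeforeRetry progression for the same volumes across this excerpt: 4s at 21:34:22, 8s at 21:34:25-26, 16s here. The volume manager's nestedpendingoperations keeps per-operation exponential backoff, doubling the wait after each failure up to a cap, so a missing secret is polled gently instead of hammered. A small Go sketch of that pattern follows; the initial delay and cap here are illustrative values, not the kubelet's exact internal constants:

    package main

    import (
        "fmt"
        "time"
    )

    // nextDelay returns a retry-delay generator that doubles after every
    // failure until it reaches a cap, matching the 4s -> 8s -> 16s
    // "durationBeforeRetry" values in the records above.
    func nextDelay(initial, limit time.Duration) func() time.Duration {
        d := initial
        return func() time.Duration {
            cur := d
            if d*2 <= limit {
                d *= 2
            } else {
                d = limit
            }
            return cur
        }
    }

    func main() {
        next := nextDelay(4*time.Second, 2*time.Minute)
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d: no retries permitted for %s\n", attempt, next())
        }
    }

When the secret finally exists, the pending operation simply succeeds in its next window, as the 21:34:49 records at the end of this excerpt show.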
Jan 30 21:34:34 crc kubenswrapper[4751]: E0130 21:34:34.598419 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:50.598396262 +0000 UTC m=+1229.344218991 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "metrics-server-cert" not found
Jan 30 21:34:34 crc kubenswrapper[4751]: E0130 21:34:34.598481 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:50.598466264 +0000 UTC m=+1229.344288913 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "webhook-server-cert" not found
Jan 30 21:34:34 crc kubenswrapper[4751]: E0130 21:34:34.657347 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898"
Jan 30 21:34:34 crc kubenswrapper[4751]: E0130 21:34:34.657558 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rccf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-8d874c8fc-6fg4r_openstack-operators(9003ffe6-59a3-4c7c-96d0-d129a9339247): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:34:34 crc kubenswrapper[4751]: E0130 21:34:34.659679 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" podUID="9003ffe6-59a3-4c7c-96d0-d129a9339247" Jan 30 21:34:34 crc kubenswrapper[4751]: E0130 21:34:34.783500 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" podUID="9003ffe6-59a3-4c7c-96d0-d129a9339247" Jan 30 21:34:36 crc kubenswrapper[4751]: E0130 21:34:36.674705 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Jan 30 21:34:36 crc kubenswrapper[4751]: E0130 21:34:36.675151 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9gn9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-7sk5v_openstack-operators(694b29bc-994c-4983-81c7-b32d47db553b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:34:36 crc kubenswrapper[4751]: E0130 21:34:36.676386 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" podUID="694b29bc-994c-4983-81c7-b32d47db553b" Jan 30 21:34:36 crc kubenswrapper[4751]: E0130 21:34:36.800269 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" podUID="694b29bc-994c-4983-81c7-b32d47db553b" Jan 30 21:34:37 crc kubenswrapper[4751]: E0130 21:34:37.146609 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10" Jan 30 21:34:37 crc kubenswrapper[4751]: E0130 21:34:37.146780 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4nbx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69d6db494d-jxkmf_openstack-operators(3fae5204-d3a1-4e39-ac3d-d28c8a55c7db): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:34:37 crc kubenswrapper[4751]: E0130 21:34:37.148796 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" podUID="3fae5204-d3a1-4e39-ac3d-d28c8a55c7db" Jan 30 21:34:37 crc kubenswrapper[4751]: E0130 21:34:37.685624 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 30 21:34:37 crc kubenswrapper[4751]: E0130 21:34:37.685894 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4zc9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-hsbbr_openstack-operators(0b3a96d4-f5fc-47be-9c28-47239b2488c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 21:34:37 crc kubenswrapper[4751]: E0130 21:34:37.687108 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" podUID="0b3a96d4-f5fc-47be-9c28-47239b2488c1"
Jan 30 21:34:37 crc kubenswrapper[4751]: E0130 21:34:37.809198 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" podUID="0b3a96d4-f5fc-47be-9c28-47239b2488c1"
Jan 30 21:34:37 crc kubenswrapper[4751]: E0130 21:34:37.809635 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" podUID="3fae5204-d3a1-4e39-ac3d-d28c8a55c7db"
Jan 30 21:34:41 crc kubenswrapper[4751]: E0130 21:34:41.964961 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be"
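Each "Unhandled Error" above embeds the failing container as a raw Go struct dump (corev1.Container from k8s.io/api), which makes the actual spec hard to read. Decoded from the recurring fields, every operator's manager container is essentially identical apart from the image and the kube-api-access-* mount: /manager with leader election, health endpoints on :8081, metrics on 127.0.0.1:8080, 10m/256Mi requests against 500m/512Mi limits, and HTTP probes on /healthz and /readyz. A readable reconstruction in client-go types, for orientation only (the dumps remain the authoritative record; the horizon image is used as the example):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    var allowEscalation = false

    // The "manager" container the kubelet keeps failing to start, rebuilt
    // by hand from the struct dumps in this log.
    var manager = corev1.Container{
        Name:    "manager",
        Image:   "quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8",
        Command: []string{"/manager"},
        Args: []string{
            "--leader-elect",
            "--health-probe-bind-address=:8081",
            "--metrics-bind-address=127.0.0.1:8080",
        },
        Resources: corev1.ResourceRequirements{
            Limits: corev1.ResourceList{
                corev1.ResourceCPU:    resource.MustParse("500m"),
                corev1.ResourceMemory: resource.MustParse("512Mi"), // 536870912 bytes in the dump
            },
            Requests: corev1.ResourceList{
                corev1.ResourceCPU:    resource.MustParse("10m"),
                corev1.ResourceMemory: resource.MustParse("256Mi"), // 268435456 bytes in the dump
            },
        },
        LivenessProbe: &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8081)},
            },
            InitialDelaySeconds: 15,
            PeriodSeconds:       20,
        },
        ReadinessProbe: &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{Path: "/readyz", Port: intstr.FromInt(8081)},
            },
            InitialDelaySeconds: 5,
            PeriodSeconds:       10,
        },
        SecurityContext: &corev1.SecurityContext{
            Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"MKNOD"}},
            AllowPrivilegeEscalation: &allowEscalation,
        },
    }

    func main() {}

The swift-operator variant further down differs only in its SecurityContext, dropping ALL capabilities and pinning an explicit RunAsUser.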
Jan 30 21:34:41 crc kubenswrapper[4751]: E0130 21:34:41.967231 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vxlg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-d6slz_openstack-operators(fcf49997-888f-4e58-99e7-f1f677dc7111): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 21:34:41 crc kubenswrapper[4751]: E0130 21:34:41.968873 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz" podUID="fcf49997-888f-4e58-99e7-f1f677dc7111"
Jan 30 21:34:42 crc kubenswrapper[4751]: E0130 21:34:42.477457 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241"
Jan 30 21:34:42 crc kubenswrapper[4751]: E0130 21:34:42.478285 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mcdvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-sc9gq_openstack-operators(3d59cc79-1a37-434a-a04b-156739f469d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:34:42 crc kubenswrapper[4751]: E0130 21:34:42.480744 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq" podUID="3d59cc79-1a37-434a-a04b-156739f469d7" Jan 30 21:34:42 crc kubenswrapper[4751]: E0130 21:34:42.853167 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq" podUID="3d59cc79-1a37-434a-a04b-156739f469d7" Jan 30 21:34:42 crc kubenswrapper[4751]: E0130 21:34:42.853330 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz" 
podUID="fcf49997-888f-4e58-99e7-f1f677dc7111" Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.215760 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.216006 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vxgxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-c7tj6_openstack-operators(c711cf07-a695-447a-8d01-147b10e9059f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.218423 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6" podUID="c711cf07-a695-447a-8d01-147b10e9059f" Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.810983 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.811426 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kt5hw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-r6smn_openstack-operators(0c86abfd-77a9-4388-8b7f-b61bb378f7cb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.813118 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn" podUID="0c86abfd-77a9-4388-8b7f-b61bb378f7cb" Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.886875 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn" 
podUID="0c86abfd-77a9-4388-8b7f-b61bb378f7cb"
Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.887153 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6" podUID="c711cf07-a695-447a-8d01-147b10e9059f"
Jan 30 21:34:44 crc kubenswrapper[4751]: I0130 21:34:44.887310 4751 scope.go:117] "RemoveContainer" containerID="4bac6aed72495d5a47025b1229e37fd0256684ee83fbbdb6b3d50f1e0a5fc0c5"
Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.893396 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/openstack-k8s-operators/telemetry-operator:a5bcf05e2d71c610156d017fdf197f7c58570d79"
Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.893579 4751 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/openstack-k8s-operators/telemetry-operator:a5bcf05e2d71c610156d017fdf197f7c58570d79"
Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.893770 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.103:5001/openstack-k8s-operators/telemetry-operator:a5bcf05e2d71c610156d017fdf197f7c58570d79,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-szjsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6749767b8f-62rqr_openstack-operators(3b9cc057-30d7-4a03-8c76-a1ca7200dbae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.895170 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr" podUID="3b9cc057-30d7-4a03-8c76-a1ca7200dbae"
Jan 30 21:34:45 crc kubenswrapper[4751]: E0130 21:34:45.566422 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e"
Jan 30 21:34:45 crc kubenswrapper[4751]: E0130 21:34:45.566811 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-szp4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-tbp7n_openstack-operators(e596dcc9-7f31-4312-99e3-7d86d318ef9d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 21:34:45 crc kubenswrapper[4751]: E0130 21:34:45.568636 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n" podUID="e596dcc9-7f31-4312-99e3-7d86d318ef9d"
Jan 30 21:34:45 crc kubenswrapper[4751]: E0130 21:34:45.893541 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n" podUID="e596dcc9-7f31-4312-99e3-7d86d318ef9d"
Jan 30 21:34:45 crc kubenswrapper[4751]: E0130 21:34:45.893608 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.103:5001/openstack-k8s-operators/telemetry-operator:a5bcf05e2d71c610156d017fdf197f7c58570d79\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr" podUID="3b9cc057-30d7-4a03-8c76-a1ca7200dbae"
Jan 30 21:34:46 crc kubenswrapper[4751]: E0130 21:34:46.211405 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17"
Jan 30 21:34:46 crc kubenswrapper[4751]: E0130 21:34:46.211676 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gcnrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-sw6zv_openstack-operators(b2777bff-2cca-4f41-8655-a737f13b4885): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 21:34:46 crc kubenswrapper[4751]: E0130 21:34:46.212968 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" podUID="b2777bff-2cca-4f41-8655-a737f13b4885"
Jan 30 21:34:46 crc kubenswrapper[4751]: E0130 21:34:46.902237 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" podUID="b2777bff-2cca-4f41-8655-a737f13b4885"
Jan 30 21:34:49 crc kubenswrapper[4751]: I0130 21:34:49.942329 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2"
Jan 30 21:34:49 crc kubenswrapper[4751]: I0130 21:34:49.955818 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2"
Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.027418 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-p7wfp"
Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.036072 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2"
Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.147728 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk"
Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.181254 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk"
Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.266908 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-bp5hg"
Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.276033 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk"
Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.656269 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.656308 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.662143 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.664642 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.866262 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-bxxmp"
Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.874481 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:53 crc kubenswrapper[4751]: E0130 21:34:53.005832 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6"
Jan 30 21:34:53 crc kubenswrapper[4751]: E0130 21:34:53.006280 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k8fxp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-9vvgb_openstack-operators(4a416a7c-3094-46ef-8370-9cad7446339b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 21:34:53 crc kubenswrapper[4751]: E0130 21:34:53.007519 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" podUID="4a416a7c-3094-46ef-8370-9cad7446339b"
Jan 30 21:34:53 crc kubenswrapper[4751]: E0130 21:34:53.426818 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"
Jan 30 21:34:53 crc kubenswrapper[4751]: E0130 21:34:53.427234 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5p4vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-v8vch_openstack-operators(a986231c-2119-4a13-801d-51119db5d365): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 21:34:53 crc kubenswrapper[4751]: E0130 21:34:53.429498 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch" podUID="a986231c-2119-4a13-801d-51119db5d365"
Jan 30 21:34:53 crc kubenswrapper[4751]: I0130 21:34:53.448888 4751 scope.go:117] "RemoveContainer" containerID="abe94b742d94eb174247c27e3a3c038f4045e5dcdb784f1f247f494e3ae1f48a"
Jan 30 21:34:53 crc kubenswrapper[4751]: I0130 21:34:53.631219 4751 scope.go:117] "RemoveContainer" containerID="347f9ed747e483e16fb6ae1c645ea8f9e1e241d75612df7496d92124e040f3b2"
Jan 30 21:34:53 crc kubenswrapper[4751]: I0130 21:34:53.713632 4751 scope.go:117] "RemoveContainer" containerID="00077efd881cb27326f6e85b8f3f194fe2c51b7a53178340a6cd81dc7d4c6583"
Jan 30 21:34:53 crc kubenswrapper[4751]: I0130 21:34:53.973280 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw" event={"ID":"236db419-e197-4a85-ab49-58cf38babea6","Type":"ContainerStarted","Data":"5fa5c830bfdd612600389bfea6674a544ce967add5e924a725be5ebb2885679a"}
Jan 30 21:34:53 crc kubenswrapper[4751]: I0130 21:34:53.973775 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw"
Jan 30 21:34:53 crc kubenswrapper[4751]: I0130 21:34:53.987480 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw" podStartSLOduration=15.784407541 podStartE2EDuration="36.987466414s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:19.009041761 +0000 UTC m=+1197.754864410" lastFinishedPulling="2026-01-30 21:34:40.212100614 +0000 UTC m=+1218.957923283" observedRunningTime="2026-01-30 21:34:53.986896389 +0000 UTC m=+1232.732719038" watchObservedRunningTime="2026-01-30 21:34:53.987466414 +0000 UTC m=+1232.733289063"
Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.012334 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h" event={"ID":"1ad347ea-d2ce-4a1e-912a-8471445396f7","Type":"ContainerStarted","Data":"2ed1c48780361b0da3291a463bb0b12eaf8d4b6a0628fa06712b48e9297a09e3"}
Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.012390 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk" event={"ID":"ace28553-76bc-4472-a671-788e1fb9a1ff","Type":"ContainerStarted","Data":"1b719fe4358ed9389812b21624b67e92bb6f60818e78767305a4e5d49e7097c3"}
Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.012583 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk"
Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.019556 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h" podStartSLOduration=12.388039844 podStartE2EDuration="37.019533701s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.167129913 +0000 UTC m=+1198.912952562" lastFinishedPulling="2026-01-30 21:34:44.79862377 +0000 UTC m=+1223.544446419" observedRunningTime="2026-01-30 21:34:54.008981969 +0000 UTC m=+1232.754804618" watchObservedRunningTime="2026-01-30 21:34:54.019533701 +0000 UTC m=+1232.765356350"
Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.059744 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk" podStartSLOduration=4.401760019 podStartE2EDuration="37.059726244s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.863163234 +0000 UTC m=+1199.608985883" lastFinishedPulling="2026-01-30 21:34:53.521129429 +0000 UTC m=+1232.266952108" observedRunningTime="2026-01-30 21:34:54.056900539 +0000 UTC m=+1232.802723188" watchObservedRunningTime="2026-01-30 21:34:54.059726244 +0000 UTC m=+1232.805548893"
Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.126575 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.126614 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.160073 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-52vr2"]
Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.203795 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk"]
Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.379260 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"]
Jan 30 21:34:54 crc kubenswrapper[4751]: W0130 21:34:54.394906 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddac6f1f3_8549_488c_bb63_aa980f4a1282.slice/crio-ff2e1c06d22df6dfdb1a2b6299334dafee22973264e0e7f72550a4ae0320e0d3 WatchSource:0}: Error finding container ff2e1c06d22df6dfdb1a2b6299334dafee22973264e0e7f72550a4ae0320e0d3: Status 404 returned error can't find the container with id ff2e1c06d22df6dfdb1a2b6299334dafee22973264e0e7f72550a4ae0320e0d3
Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.992705 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx" event={"ID":"cbae5889-938b-4211-94a6-de960df2f95d","Type":"ContainerStarted","Data":"4b6ca8b0c876caf2fcdd17416cd135560c19ab71fb393357753836ea0497d737"}
Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.993242 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx"
Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 21:34:55.011226 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" event={"ID":"0026e471-8226-4038-8c52-f0add2877c8d","Type":"ContainerStarted","Data":"22bed683c9777fa2ec651abaa8b0249cd6ed9ce02d93cd2508d4bc79adc5c7b1"}
Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 21:34:55.012198 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" event={"ID":"dac6f1f3-8549-488c-bb63-aa980f4a1282","Type":"ContainerStarted","Data":"ff2e1c06d22df6dfdb1a2b6299334dafee22973264e0e7f72550a4ae0320e0d3"}
Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 21:34:55.012997 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" event={"ID":"2d6f1acc-6416-44ae-9082-3ebe16dce448","Type":"ContainerStarted","Data":"51a27cb253d3d55b9a1e7d0fae5bb0e213d638f3c401b29e1517770777f5c785"}
Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 21:34:55.014152 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl" event={"ID":"0fd5051a-5be4-4336-af86-9674469b76a0","Type":"ContainerStarted","Data":"4a4d622ebabe0baf4bd923e5a4affe3b44810af0cb669bd6e91087a654cca655"}
Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 21:34:55.014315 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl"
Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 21:34:55.018820 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb" event={"ID":"9a88f139-89db-4b3a-8fea-bf951e59f564","Type":"ContainerStarted","Data":"673f6fbf17424d40a3292cb90a3b3659b28fc30d1e56626b39acb557d110443e"}
Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 21:34:55.018858 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb"
Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 21:34:55.019202 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h"
Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 21:34:55.106951 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx" podStartSLOduration=4.441512902 podStartE2EDuration="37.106935966s" podCreationTimestamp="2026-01-30 21:34:18 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.855788087 +0000 UTC m=+1199.601610736" lastFinishedPulling="2026-01-30 21:34:53.521211151 +0000 UTC m=+1232.267033800" observedRunningTime="2026-01-30 21:34:55.077497219 +0000 UTC m=+1233.823319868" watchObservedRunningTime="2026-01-30 21:34:55.106935966 +0000 UTC m=+1233.852758605"
Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 21:34:55.138740 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl" podStartSLOduration=11.810153458 podStartE2EDuration="38.138724894s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:19.864680145 +0000 UTC m=+1198.610502794" lastFinishedPulling="2026-01-30 21:34:46.193251531 +0000 UTC m=+1224.939074230" observedRunningTime="2026-01-30 21:34:55.137712038 +0000 UTC m=+1233.883534687" watchObservedRunningTime="2026-01-30 21:34:55.138724894 +0000 UTC m=+1233.884547543"
Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 21:34:55.151063 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb" podStartSLOduration=12.087689122 podStartE2EDuration="38.151042074s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.129897099 +0000 UTC m=+1198.875719738" lastFinishedPulling="2026-01-30 21:34:46.193250041 +0000 UTC m=+1224.939072690" observedRunningTime="2026-01-30 21:34:55.11457265 +0000 UTC m=+1233.860395299" watchObservedRunningTime="2026-01-30 21:34:55.151042074 +0000 UTC m=+1233.896864723"
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.035663 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz" event={"ID":"fcf49997-888f-4e58-99e7-f1f677dc7111","Type":"ContainerStarted","Data":"0065c8a882442cc833b09c16c3f19959d7b9b5ecfd1c352525ce1dac50a8453f"}
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.036253 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz"
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.045557 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" event={"ID":"0b3a96d4-f5fc-47be-9c28-47239b2488c1","Type":"ContainerStarted","Data":"d119097e01f5202f4300c2ff25ad3a798c72ae5270d64d63ff575555b33cae32"}
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.046383 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr"
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.051797 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz" podStartSLOduration=4.118363621 podStartE2EDuration="39.051781472s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.771624129 +0000 UTC m=+1199.517446778" lastFinishedPulling="2026-01-30 21:34:55.70504198 +0000 UTC m=+1234.450864629" observedRunningTime="2026-01-30 21:34:56.049551112 +0000 UTC m=+1234.795373761" watchObservedRunningTime="2026-01-30 21:34:56.051781472 +0000 UTC m=+1234.797604121"
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.056702 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" event={"ID":"f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9","Type":"ContainerStarted","Data":"fbde8d72f1fc39b1f74c2d0bc1c2d5bf96862e1f7b1d06e8d530d9e4198b7b9d"}
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.057048 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf"
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.064437 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" event={"ID":"9003ffe6-59a3-4c7c-96d0-d129a9339247","Type":"ContainerStarted","Data":"13a8109b51a4c5e715db5d09987428acf7c9346c4a030fa77527c6ac71340c9b"}
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.064643 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r"
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.069776 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" podStartSLOduration=5.484966544 podStartE2EDuration="39.069764603s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.167907584 +0000 UTC m=+1198.913730233" lastFinishedPulling="2026-01-30 21:34:53.752705643 +0000 UTC m=+1232.498528292" observedRunningTime="2026-01-30 21:34:56.064942724 +0000 UTC m=+1234.810765393" watchObservedRunningTime="2026-01-30 21:34:56.069764603 +0000 UTC m=+1234.815587252"
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.071785 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" event={"ID":"dac6f1f3-8549-488c-bb63-aa980f4a1282","Type":"ContainerStarted","Data":"62c7020bf65aa30bbe0811245bd0fca8a2ba912af669ab81357dc420c6fb354c"}
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.071957 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.073393 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq" event={"ID":"3d59cc79-1a37-434a-a04b-156739f469d7","Type":"ContainerStarted","Data":"562b62c113879705894b59c893522e12cf1dcd15b6b9b096b87d13907e1a9f19"}
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.073552 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq"
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.074623 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" event={"ID":"694b29bc-994c-4983-81c7-b32d47db553b","Type":"ContainerStarted","Data":"98639b3d14f774696a05d9d16f0133b57465a837d6405d368795058b7e1a3ca7"}
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.074748 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v"
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.078836 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" event={"ID":"3fae5204-d3a1-4e39-ac3d-d28c8a55c7db","Type":"ContainerStarted","Data":"eeecfd4751c457ab23201cb5d49e83c31bfd47168c8de8e1e03dbb30ad11097e"}
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.079265 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf"
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.086297 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" podStartSLOduration=5.148898675 podStartE2EDuration="39.086277603s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:19.833484101 +0000 UTC m=+1198.579306750" lastFinishedPulling="2026-01-30 21:34:53.770863029 +0000 UTC m=+1232.516685678" observedRunningTime="2026-01-30 21:34:56.075380193 +0000 UTC m=+1234.821202842" watchObservedRunningTime="2026-01-30 21:34:56.086277603 +0000 UTC m=+1234.832100252"
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.119092 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" podStartSLOduration=5.22692729 podStartE2EDuration="39.119073959s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:19.860572555 +0000 UTC m=+1198.606395204" lastFinishedPulling="2026-01-30 21:34:53.752719224 +0000 UTC m=+1232.498541873" observedRunningTime="2026-01-30 21:34:56.102142357 +0000 UTC m=+1234.847965006" watchObservedRunningTime="2026-01-30 21:34:56.119073959 +0000 UTC m=+1234.864896608"
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.129412 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" podStartSLOduration=5.23742279 podStartE2EDuration="39.129393395s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:19.860814101 +0000 UTC m=+1198.606636750" lastFinishedPulling="2026-01-30 21:34:53.752784706 +0000 UTC m=+1232.498607355" observedRunningTime="2026-01-30 21:34:56.129381515 +0000 UTC m=+1234.875204164" watchObservedRunningTime="2026-01-30 21:34:56.129393395 +0000 UTC m=+1234.875216074"
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.154426 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" podStartSLOduration=5.58441943 podStartE2EDuration="39.154408484s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.161239846 +0000 UTC m=+1198.907062495" lastFinishedPulling="2026-01-30 21:34:53.7312289 +0000 UTC m=+1232.477051549" observedRunningTime="2026-01-30 21:34:56.149545843 +0000 UTC m=+1234.895368492" watchObservedRunningTime="2026-01-30 21:34:56.154408484 +0000 UTC m=+1234.900231133"
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.178577 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq" podStartSLOduration=3.600743625 podStartE2EDuration="38.178559718s" podCreationTimestamp="2026-01-30 21:34:18 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.798740204 +0000 UTC m=+1199.544562853" lastFinishedPulling="2026-01-30 21:34:55.376556307 +0000 UTC m=+1234.122378946" observedRunningTime="2026-01-30 21:34:56.171389987 +0000 UTC m=+1234.917212646" watchObservedRunningTime="2026-01-30 21:34:56.178559718 +0000 UTC m=+1234.924382357"
Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.230808 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" podStartSLOduration=38.230793354 podStartE2EDuration="38.230793354s" podCreationTimestamp="2026-01-30 21:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:34:56.227787273 +0000 UTC m=+1234.973609922" watchObservedRunningTime="2026-01-30 21:34:56.230793354 +0000 UTC m=+1234.976616003"
Jan 30 21:34:57 crc kubenswrapper[4751]: I0130 21:34:57.113928 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn" event={"ID":"0c86abfd-77a9-4388-8b7f-b61bb378f7cb","Type":"ContainerStarted","Data":"25c64994eab3654900b8462a41b5e82659584d507d33a0169bf1cf58a8c1563b"}
Jan 30 21:34:57 crc kubenswrapper[4751]: I0130 21:34:57.114352 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn"
Jan 30 21:34:57 crc kubenswrapper[4751]: I0130 21:34:57.127951 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6" event={"ID":"c711cf07-a695-447a-8d01-147b10e9059f","Type":"ContainerStarted","Data":"65ce6678db5c348ded66e26df03e88cf5cc9ea44e66f2bea0f2dd7ed64914148"}
Jan 30 21:34:57 crc kubenswrapper[4751]: I0130 21:34:57.129740 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6"
Jan 30 21:34:57 crc kubenswrapper[4751]: I0130 21:34:57.137477 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn" podStartSLOduration=4.541451722 podStartE2EDuration="40.137455261s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.799273658 +0000 UTC m=+1199.545096307" lastFinishedPulling="2026-01-30 21:34:56.395277187 +0000 UTC m=+1235.141099846" observedRunningTime="2026-01-30 21:34:57.130643968 +0000 UTC m=+1235.876466617" watchObservedRunningTime="2026-01-30 21:34:57.137455261 +0000 UTC m=+1235.883277910"
Jan 30 21:34:57 crc kubenswrapper[4751]: I0130 21:34:57.154841 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6" podStartSLOduration=4.4945907609999995 podStartE2EDuration="40.154822075s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.842943455 +0000 UTC m=+1199.588766104" lastFinishedPulling="2026-01-30 21:34:56.503174769 +0000 UTC m=+1235.248997418" observedRunningTime="2026-01-30 21:34:57.148701771 +0000 UTC m=+1235.894524440" watchObservedRunningTime="2026-01-30 21:34:57.154822075 +0000 UTC m=+1235.900644724"
Jan 30 21:34:58 crc kubenswrapper[4751]: I0130 21:34:58.370572 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h"
Jan 30 21:34:58 crc kubenswrapper[4751]: I0130 21:34:58.613380 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk"
Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.153136 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" event={"ID":"b2777bff-2cca-4f41-8655-a737f13b4885","Type":"ContainerStarted","Data":"a5b059559b5215a322f41f19ccd90fde5e3bbbaf7eb96502655a3f4d2bccc703"}
Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.154302 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv"
Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.156086 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" event={"ID":"0026e471-8226-4038-8c52-f0add2877c8d","Type":"ContainerStarted","Data":"36f6150b9b0b284192d5acf270cbd8ea3642994d684f14b41e618315a802d1c9"}
Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.156532 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk"
Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.158547 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n" event={"ID":"e596dcc9-7f31-4312-99e3-7d86d318ef9d","Type":"ContainerStarted","Data":"6a1379304ff76647533441df6e1b27eb60488c7403124b9c15cc9b521c00747e"}
Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.159076 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n"
Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.161020 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" event={"ID":"2d6f1acc-6416-44ae-9082-3ebe16dce448","Type":"ContainerStarted","Data":"0079251a734d5d58daf7ed7163417339edbacaef2022d9e5c0ed0507c4570a06"}
Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.161443 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2"
Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.169051 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" podStartSLOduration=3.8233985820000003 podStartE2EDuration="43.169033564s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.164947355 +0000 UTC m=+1198.910770004" lastFinishedPulling="2026-01-30 21:34:59.510582347 +0000 UTC m=+1238.256404986" observedRunningTime="2026-01-30 21:35:00.167868472 +0000 UTC m=+1238.913691131" watchObservedRunningTime="2026-01-30 21:35:00.169033564 +0000 UTC m=+1238.914856213"
Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.232936 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" podStartSLOduration=38.060495115 podStartE2EDuration="43.232914521s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:54.186977783 +0000 UTC m=+1232.932800432" lastFinishedPulling="2026-01-30 21:34:59.359397189 +0000 UTC m=+1238.105219838" observedRunningTime="2026-01-30 21:35:00.210480151 +0000 UTC m=+1238.956302800" watchObservedRunningTime="2026-01-30 21:35:00.232914521 +0000 UTC m=+1238.978737170"
Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.240271 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" podStartSLOduration=38.081949798 podStartE2EDuration="43.240248276s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:54.198686786 +0000 UTC m=+1232.944509435" lastFinishedPulling="2026-01-30 21:34:59.356985264 +0000 UTC m=+1238.102807913" observedRunningTime="2026-01-30 21:35:00.231363749 +0000 UTC m=+1238.977186408" watchObservedRunningTime="2026-01-30 21:35:00.240248276 +0000 UTC m=+1238.986070935"
Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.251544 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n" podStartSLOduration=4.057083084 podStartE2EDuration="43.251521807s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.163792544 +0000 UTC m=+1198.909615193" lastFinishedPulling="2026-01-30 21:34:59.358231277 +0000 UTC m=+1238.104053916" observedRunningTime="2026-01-30 21:35:00.25086545 +0000 UTC m=+1238.996688099" watchObservedRunningTime="2026-01-30 21:35:00.251521807 +0000 UTC m=+1238.997344456"
Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.896249 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:35:01 crc kubenswrapper[4751]: I0130 21:35:01.168527 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr" event={"ID":"3b9cc057-30d7-4a03-8c76-a1ca7200dbae","Type":"ContainerStarted","Data":"4f33d3927639f134700295e7f1922aa32953c741e708de259922e5cf89fe6a65"}
Jan 30 21:35:01 crc kubenswrapper[4751]: I0130 21:35:01.169167 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr"
Jan 30 21:35:01 crc kubenswrapper[4751]: I0130 21:35:01.192820 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr" podStartSLOduration=4.017583158 podStartE2EDuration="43.192792638s" podCreationTimestamp="2026-01-30 21:34:18 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.851744569 +0000 UTC m=+1199.597567218" lastFinishedPulling="2026-01-30 21:35:00.026954049 +0000 UTC m=+1238.772776698" observedRunningTime="2026-01-30 21:35:01.186039608 +0000 UTC m=+1239.931862257" watchObservedRunningTime="2026-01-30 21:35:01.192792638 +0000 UTC m=+1239.938615307"
Jan 30 21:35:03 crc kubenswrapper[4751]: E0130 21:35:03.977478 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" podUID="4a416a7c-3094-46ef-8370-9cad7446339b"
Jan 30 21:35:07 crc kubenswrapper[4751]: I0130 21:35:07.932865 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw"
Jan 30 21:35:07 crc kubenswrapper[4751]: I0130 21:35:07.951270 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r"
Jan 30 21:35:07 crc kubenswrapper[4751]: I0130 21:35:07.969705 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf"
Jan 30 21:35:07 crc kubenswrapper[4751]: E0130 21:35:07.977844 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch" podUID="a986231c-2119-4a13-801d-51119db5d365"
Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.014254 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl"
Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.042554 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf"
Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.119032 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr"
Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.123757 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb"
Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.331030 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv"
Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.370964 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v"
Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.411008 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n"
Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.456045 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz"
Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.593104 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6"
Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.640963 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn"
Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.667429 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr"
Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.922254 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq"
Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.984097 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx"
Jan 30 21:35:10 crc kubenswrapper[4751]: I0130 21:35:10.044577 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2"
Jan 30 21:35:10 crc kubenswrapper[4751]: I0130 21:35:10.282576 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk"
Jan 30 21:35:19 crc kubenswrapper[4751]: I0130 21:35:19.357893 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" event={"ID":"4a416a7c-3094-46ef-8370-9cad7446339b","Type":"ContainerStarted","Data":"d4abf5f03de72ca9b0b1149ac692919f1fe1c16e4e240c5e153ff2b647dcc260"}
Jan 30 21:35:19 crc kubenswrapper[4751]: I0130 21:35:19.358753 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb"
Jan 30 21:35:19 crc kubenswrapper[4751]: I0130 21:35:19.375409 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" podStartSLOduration=4.803639255 podStartE2EDuration="1m2.375391126s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.887185796 +0000 UTC m=+1199.633008445" lastFinishedPulling="2026-01-30 21:35:18.458937647 +0000 UTC m=+1257.204760316" observedRunningTime="2026-01-30 21:35:19.372069107 +0000 UTC m=+1258.117891826" watchObservedRunningTime="2026-01-30 21:35:19.375391126 +0000 UTC m=+1258.121213775"
Jan 30 21:35:21 crc kubenswrapper[4751]: I0130 21:35:21.383245 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch" event={"ID":"a986231c-2119-4a13-801d-51119db5d365","Type":"ContainerStarted","Data":"f1ebf2d10c1dabf042805319fd6922fa6222ef7dee3ae7cb0cab17ccb2932acc"}
Jan 30 21:35:21 crc kubenswrapper[4751]: I0130 21:35:21.415218 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch" podStartSLOduration=3.831972231 podStartE2EDuration="1m3.415183968s" podCreationTimestamp="2026-01-30 21:34:18 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.853225229 +0000 UTC m=+1199.599047878" lastFinishedPulling="2026-01-30 21:35:20.436436966 +0000 UTC m=+1259.182259615" observedRunningTime="2026-01-30 21:35:21.406398794 +0000 UTC m=+1260.152221463" watchObservedRunningTime="2026-01-30 21:35:21.415183968 +0000 UTC m=+1260.161006657"
Jan 30 21:35:24 crc kubenswrapper[4751]: I0130 21:35:24.127237 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 21:35:24 crc kubenswrapper[4751]: I0130 21:35:24.128120 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 21:35:28 crc kubenswrapper[4751]: I0130 21:35:28.379544 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb"
Jan 30 21:35:45 crc kubenswrapper[4751]: I0130 21:35:45.909914 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8tkr2"]
Jan 30 21:35:45 crc kubenswrapper[4751]: I0130 21:35:45.911802 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2"
Jan 30 21:35:45 crc kubenswrapper[4751]: I0130 21:35:45.922599 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 30 21:35:45 crc kubenswrapper[4751]: I0130 21:35:45.922730 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 30 21:35:45 crc kubenswrapper[4751]: I0130 21:35:45.922849 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 30 21:35:45 crc kubenswrapper[4751]: I0130 21:35:45.922888 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-m7mgl"
Jan 30 21:35:45 crc kubenswrapper[4751]: I0130 21:35:45.929875 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8tkr2"]
Jan 30 21:35:45 crc kubenswrapper[4751]: I0130 21:35:45.997063 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5w7xw"]
Jan 30 21:35:45 crc kubenswrapper[4751]: I0130 21:35:45.998836 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw"
Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.001899 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.022383 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5w7xw"]
Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.038086 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28e93be-b42f-4075-9092-349b11c825bb-config\") pod \"dnsmasq-dns-675f4bcbfc-8tkr2\" (UID: \"a28e93be-b42f-4075-9092-349b11c825bb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2"
Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.038260 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmqld\" (UniqueName: \"kubernetes.io/projected/a28e93be-b42f-4075-9092-349b11c825bb-kube-api-access-pmqld\") pod \"dnsmasq-dns-675f4bcbfc-8tkr2\" (UID: \"a28e93be-b42f-4075-9092-349b11c825bb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2"
Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.140380 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpc4c\" (UniqueName: \"kubernetes.io/projected/ef9f34c1-a280-43a3-a78b-6a10c2972759-kube-api-access-bpc4c\") pod \"dnsmasq-dns-78dd6ddcc-5w7xw\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw"
Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.140570 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5w7xw\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw"
Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.140602 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28e93be-b42f-4075-9092-349b11c825bb-config\") pod \"dnsmasq-dns-675f4bcbfc-8tkr2\" (UID: \"a28e93be-b42f-4075-9092-349b11c825bb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2"
Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.140651 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-config\") pod \"dnsmasq-dns-78dd6ddcc-5w7xw\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw"
Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.140729 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmqld\" (UniqueName: \"kubernetes.io/projected/a28e93be-b42f-4075-9092-349b11c825bb-kube-api-access-pmqld\") pod \"dnsmasq-dns-675f4bcbfc-8tkr2\" (UID: \"a28e93be-b42f-4075-9092-349b11c825bb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2"
Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.142630 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28e93be-b42f-4075-9092-349b11c825bb-config\") pod \"dnsmasq-dns-675f4bcbfc-8tkr2\" (UID: \"a28e93be-b42f-4075-9092-349b11c825bb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2"
Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.164482 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmqld\" (UniqueName: \"kubernetes.io/projected/a28e93be-b42f-4075-9092-349b11c825bb-kube-api-access-pmqld\") pod \"dnsmasq-dns-675f4bcbfc-8tkr2\" (UID: \"a28e93be-b42f-4075-9092-349b11c825bb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2"
Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.242129 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5w7xw\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw"
Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.242189 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-config\") pod \"dnsmasq-dns-78dd6ddcc-5w7xw\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw"
Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.242252 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpc4c\" (UniqueName: \"kubernetes.io/projected/ef9f34c1-a280-43a3-a78b-6a10c2972759-kube-api-access-bpc4c\") pod \"dnsmasq-dns-78dd6ddcc-5w7xw\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw"
Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.243109 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5w7xw\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw"
Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.243158 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-config\") pod \"dnsmasq-dns-78dd6ddcc-5w7xw\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw"
Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.258983 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2"
Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.260399 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpc4c\" (UniqueName: \"kubernetes.io/projected/ef9f34c1-a280-43a3-a78b-6a10c2972759-kube-api-access-bpc4c\") pod \"dnsmasq-dns-78dd6ddcc-5w7xw\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw"
Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.325727 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw"
Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.729800 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8tkr2"]
Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.847248 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5w7xw"]
Jan 30 21:35:47 crc kubenswrapper[4751]: I0130 21:35:47.460276 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw" event={"ID":"ef9f34c1-a280-43a3-a78b-6a10c2972759","Type":"ContainerStarted","Data":"db81d7e720a3f967946b93c7cdb416134679dd7e6418e2fc62f067e92c234fe4"}
Jan 30 21:35:47 crc kubenswrapper[4751]: I0130 21:35:47.462006 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2" event={"ID":"a28e93be-b42f-4075-9092-349b11c825bb","Type":"ContainerStarted","Data":"93642ef2b0df57668df0e7cd91eb49b59436760eab9b26ed0b040d59521f1d2c"}
Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.508232 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8tkr2"]
Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.537845 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-whdw4"]
Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.539440 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-whdw4"
Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.569243 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-whdw4"]
Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.689783 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvr5f\" (UniqueName: \"kubernetes.io/projected/d3d45f11-44b0-4b38-b308-c99c83e52e6b-kube-api-access-gvr5f\") pod \"dnsmasq-dns-666b6646f7-whdw4\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") " pod="openstack/dnsmasq-dns-666b6646f7-whdw4"
Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.689867 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-whdw4\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") " pod="openstack/dnsmasq-dns-666b6646f7-whdw4"
Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.690004 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-config\") pod \"dnsmasq-dns-666b6646f7-whdw4\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") " pod="openstack/dnsmasq-dns-666b6646f7-whdw4"
Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.791682 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-whdw4\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") " pod="openstack/dnsmasq-dns-666b6646f7-whdw4"
Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.792041 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-config\") pod 
\"dnsmasq-dns-666b6646f7-whdw4\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") " pod="openstack/dnsmasq-dns-666b6646f7-whdw4" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.792176 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvr5f\" (UniqueName: \"kubernetes.io/projected/d3d45f11-44b0-4b38-b308-c99c83e52e6b-kube-api-access-gvr5f\") pod \"dnsmasq-dns-666b6646f7-whdw4\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") " pod="openstack/dnsmasq-dns-666b6646f7-whdw4" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.793353 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-whdw4\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") " pod="openstack/dnsmasq-dns-666b6646f7-whdw4" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.793366 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-config\") pod \"dnsmasq-dns-666b6646f7-whdw4\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") " pod="openstack/dnsmasq-dns-666b6646f7-whdw4" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.812757 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvr5f\" (UniqueName: \"kubernetes.io/projected/d3d45f11-44b0-4b38-b308-c99c83e52e6b-kube-api-access-gvr5f\") pod \"dnsmasq-dns-666b6646f7-whdw4\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") " pod="openstack/dnsmasq-dns-666b6646f7-whdw4" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.822738 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5w7xw"] Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.854472 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-628lt"] Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.856569 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.860300 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-whdw4" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.872304 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-628lt"] Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.997106 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgm5m\" (UniqueName: \"kubernetes.io/projected/33135688-6f3e-426e-be2b-0e455d6736e6-kube-api-access-fgm5m\") pod \"dnsmasq-dns-57d769cc4f-628lt\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.997169 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-config\") pod \"dnsmasq-dns-57d769cc4f-628lt\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.997255 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-628lt\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.098629 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgm5m\" (UniqueName: \"kubernetes.io/projected/33135688-6f3e-426e-be2b-0e455d6736e6-kube-api-access-fgm5m\") pod \"dnsmasq-dns-57d769cc4f-628lt\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.098686 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-config\") pod \"dnsmasq-dns-57d769cc4f-628lt\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.098787 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-628lt\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.100340 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-628lt\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.100402 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-config\") pod \"dnsmasq-dns-57d769cc4f-628lt\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.120319 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgm5m\" (UniqueName: 
\"kubernetes.io/projected/33135688-6f3e-426e-be2b-0e455d6736e6-kube-api-access-fgm5m\") pod \"dnsmasq-dns-57d769cc4f-628lt\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.200665 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.490119 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-whdw4"] Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.669274 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-628lt"] Jan 30 21:35:49 crc kubenswrapper[4751]: W0130 21:35:49.682911 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33135688_6f3e_426e_be2b_0e455d6736e6.slice/crio-5b26c9f9622d5f37dabd6fb741797aade48188fd9c3b092f168e67b8d44a96db WatchSource:0}: Error finding container 5b26c9f9622d5f37dabd6fb741797aade48188fd9c3b092f168e67b8d44a96db: Status 404 returned error can't find the container with id 5b26c9f9622d5f37dabd6fb741797aade48188fd9c3b092f168e67b8d44a96db Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.701197 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.703727 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.709264 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.710447 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.710570 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.712054 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-qdzcx" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.712240 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.712370 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.715521 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.761498 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.767825 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.769582 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.774546 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.776808 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.783712 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.794538 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.823641 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-config-data\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.823987 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.824111 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.824187 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f18b5d57-5b05-4ef0-bae3-68938e094510-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.824252 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.824378 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.824763 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f18b5d57-5b05-4ef0-bae3-68938e094510-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.824793 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 
21:35:49.824815 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rt94\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-kube-api-access-8rt94\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.824840 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.824869 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.926986 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-config-data\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927032 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-config-data\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927054 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlq5r\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-kube-api-access-nlq5r\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927083 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/192a5913-0c28-4214-9ac0-d37ca2eeb34c-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927099 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ed6288f-1f28-4189-a452-10ed3fa78c7f-pod-info\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927124 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927139 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927257 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927306 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-config-data\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927367 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927401 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ed6288f-1f28-4189-a452-10ed3fa78c7f-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927425 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-server-conf\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927539 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927584 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/192a5913-0c28-4214-9ac0-d37ca2eeb34c-pod-info\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927619 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927695 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927731 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927754 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927837 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f18b5d57-5b05-4ef0-bae3-68938e094510-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927904 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927978 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-server-conf\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927999 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-846ec118-ed9e-4829-80fb-53a6edccba77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928220 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-config-data\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928243 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928256 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") pod \"rabbitmq-server-0\" (UID: 
\"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928416 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f18b5d57-5b05-4ef0-bae3-68938e094510-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928444 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rt94\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-kube-api-access-8rt94\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928464 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928490 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928521 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928551 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928572 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928615 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928637 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqvcx\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-kube-api-access-zqvcx\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 
21:35:49.928670 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.929128 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.930650 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.931149 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.932795 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f18b5d57-5b05-4ef0-bae3-68938e094510-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.933230 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.936060 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.936082 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d4befba5b3452f215320be9365c178860d706182c1f41ab25a94828e6255d8c2/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.936824 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.944780 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rt94\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-kube-api-access-8rt94\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.950961 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f18b5d57-5b05-4ef0-bae3-68938e094510-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.975486 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:49.997893 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:49.999639 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.001812 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.002152 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qvp6f" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.002243 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.002311 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.002509 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.002652 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.003233 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.009807 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.028829 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.029773 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ed6288f-1f28-4189-a452-10ed3fa78c7f-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.029806 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-server-conf\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.029857 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.029876 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/192a5913-0c28-4214-9ac0-d37ca2eeb34c-pod-info\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.029894 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.029914 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.029950 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030008 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-server-conf\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030027 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-846ec118-ed9e-4829-80fb-53a6edccba77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030094 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030112 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030135 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030149 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqvcx\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-kube-api-access-zqvcx\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030176 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030200 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-config-data\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030219 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlq5r\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-kube-api-access-nlq5r\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030266 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/192a5913-0c28-4214-9ac0-d37ca2eeb34c-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030282 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ed6288f-1f28-4189-a452-10ed3fa78c7f-pod-info\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030310 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030342 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030389 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-config-data\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030391 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030413 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.031480 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.033598 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-server-conf\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.034982 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/192a5913-0c28-4214-9ac0-d37ca2eeb34c-pod-info\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.036393 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.036777 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-config-data\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.037648 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.038723 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.039192 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.039743 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-config-data\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.042892 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.043281 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-server-conf\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: 
I0130 21:35:50.044786 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.044817 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/38819e4ab89b59440f000d1a076c7489b3d13c82621db763cbf8d17a6b6689f4/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.047376 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.047436 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-846ec118-ed9e-4829-80fb-53a6edccba77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2001e391e04ee7d0edfbd20e4205f1b60c57288335d512357b8e0f2ce2f191a2/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.057785 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/192a5913-0c28-4214-9ac0-d37ca2eeb34c-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.057899 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ed6288f-1f28-4189-a452-10ed3fa78c7f-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.058376 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.059193 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.060627 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ed6288f-1f28-4189-a452-10ed3fa78c7f-pod-info\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.064625 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlq5r\" (UniqueName: 
\"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-kube-api-access-nlq5r\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.066085 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqvcx\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-kube-api-access-zqvcx\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.068437 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.087788 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.099345 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-846ec118-ed9e-4829-80fb-53a6edccba77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.136253 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.136748 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.137012 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/61d75daf-41cb-4ab5-b849-c98080ca748b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.137111 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.137155 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.137199 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.137287 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.137456 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/61d75daf-41cb-4ab5-b849-c98080ca748b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.137481 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hw2d\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-kube-api-access-8hw2d\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.137546 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.137624 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239532 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239601 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239675 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/61d75daf-41cb-4ab5-b849-c98080ca748b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239700 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hw2d\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-kube-api-access-8hw2d\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239738 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239773 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239821 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239869 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239912 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/61d75daf-41cb-4ab5-b849-c98080ca748b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239950 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239977 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.240574 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.241688 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.244089 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.244771 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.245128 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.247539 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.247574 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ea75936dffe846fa8fe6e7d04e4555ffbed93863b04fcd828432921ea88ef24a/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.248656 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/61d75daf-41cb-4ab5-b849-c98080ca748b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.249886 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.250306 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.257674 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/61d75daf-41cb-4ab5-b849-c98080ca748b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.282564 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hw2d\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-kube-api-access-8hw2d\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.349305 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.388072 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.403145 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.427620 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.508873 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-whdw4" event={"ID":"d3d45f11-44b0-4b38-b308-c99c83e52e6b","Type":"ContainerStarted","Data":"3071dbc640f12657ce923f3e1023fb8d61a64a9e5353065a4040dc6a73df2531"} Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.515667 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-628lt" event={"ID":"33135688-6f3e-426e-be2b-0e455d6736e6","Type":"ContainerStarted","Data":"5b26c9f9622d5f37dabd6fb741797aade48188fd9c3b092f168e67b8d44a96db"} Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.608026 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.051175 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.092890 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 30 21:35:51 crc kubenswrapper[4751]: W0130 21:35:51.149670 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod192a5913_0c28_4214_9ac0_d37ca2eeb34c.slice/crio-e8cf5c49c1669ca82eb54f5065d6c41864e69f1712fef33128cb50eb2e139821 WatchSource:0}: Error finding container e8cf5c49c1669ca82eb54f5065d6c41864e69f1712fef33128cb50eb2e139821: Status 404 returned error can't find the container with id e8cf5c49c1669ca82eb54f5065d6c41864e69f1712fef33128cb50eb2e139821 Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.210702 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.408605 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.411404 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.420463 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.429399 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-4mml2" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.429584 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.429824 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.430886 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.449769 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.494970 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d55cd7e5-6799-4e1a-9f3b-a92937aca796-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.495038 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d55cd7e5-6799-4e1a-9f3b-a92937aca796-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.495073 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d55cd7e5-6799-4e1a-9f3b-a92937aca796-config-data-default\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.495089 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d55cd7e5-6799-4e1a-9f3b-a92937aca796-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.495143 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d55cd7e5-6799-4e1a-9f3b-a92937aca796-kolla-config\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.495165 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94k2l\" (UniqueName: \"kubernetes.io/projected/d55cd7e5-6799-4e1a-9f3b-a92937aca796-kube-api-access-94k2l\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.495205 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9cb78143-74df-4a95-9869-ac578dee880c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9cb78143-74df-4a95-9869-ac578dee880c\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.495231 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d55cd7e5-6799-4e1a-9f3b-a92937aca796-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.534469 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f18b5d57-5b05-4ef0-bae3-68938e094510","Type":"ContainerStarted","Data":"a7dc563e23807f6efe79faed84ec9c2b00f86190217519d5f3838b56a30401b8"} Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.537957 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"2ed6288f-1f28-4189-a452-10ed3fa78c7f","Type":"ContainerStarted","Data":"14b244ff165ab8225e2f7204427c69fbfcfd61b1331f0eb3d778a03cddbe88d2"} Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.539503 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"192a5913-0c28-4214-9ac0-d37ca2eeb34c","Type":"ContainerStarted","Data":"e8cf5c49c1669ca82eb54f5065d6c41864e69f1712fef33128cb50eb2e139821"} Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.542312 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"61d75daf-41cb-4ab5-b849-c98080ca748b","Type":"ContainerStarted","Data":"f4f06c01fc35225b23f5f598399e00ef90da1d1a2d96b3cf839a507f64a8e8e3"} Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.598238 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d55cd7e5-6799-4e1a-9f3b-a92937aca796-config-data-default\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.598282 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d55cd7e5-6799-4e1a-9f3b-a92937aca796-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.598350 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d55cd7e5-6799-4e1a-9f3b-a92937aca796-kolla-config\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.598373 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94k2l\" (UniqueName: \"kubernetes.io/projected/d55cd7e5-6799-4e1a-9f3b-a92937aca796-kube-api-access-94k2l\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.598413 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-9cb78143-74df-4a95-9869-ac578dee880c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9cb78143-74df-4a95-9869-ac578dee880c\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.598437 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d55cd7e5-6799-4e1a-9f3b-a92937aca796-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.598473 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d55cd7e5-6799-4e1a-9f3b-a92937aca796-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.598505 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d55cd7e5-6799-4e1a-9f3b-a92937aca796-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.598905 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d55cd7e5-6799-4e1a-9f3b-a92937aca796-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.599608 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d55cd7e5-6799-4e1a-9f3b-a92937aca796-config-data-default\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.601221 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d55cd7e5-6799-4e1a-9f3b-a92937aca796-kolla-config\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.609340 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d55cd7e5-6799-4e1a-9f3b-a92937aca796-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.614495 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d55cd7e5-6799-4e1a-9f3b-a92937aca796-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.614822 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.614849 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9cb78143-74df-4a95-9869-ac578dee880c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9cb78143-74df-4a95-9869-ac578dee880c\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1693b5ff48275fd8237f04db183f9f1c5204f43a559a540307ddb2f7e8d8c98a/globalmount\"" pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.629662 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94k2l\" (UniqueName: \"kubernetes.io/projected/d55cd7e5-6799-4e1a-9f3b-a92937aca796-kube-api-access-94k2l\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.646833 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d55cd7e5-6799-4e1a-9f3b-a92937aca796-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.663758 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9cb78143-74df-4a95-9869-ac578dee880c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9cb78143-74df-4a95-9869-ac578dee880c\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.761989 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.452533 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.744847 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.747925 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.749720 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-sfwqq" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.750002 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.752103 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.752704 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.771097 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.826190 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.826265 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.826317 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.826349 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.826382 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.826404 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d42993b1-8455-4142-bdec-aeabe96436cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d42993b1-8455-4142-bdec-aeabe96436cb\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.826434 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-h2zvf\" (UniqueName: \"kubernetes.io/projected/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-kube-api-access-h2zvf\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.826468 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.928021 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.928097 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.928125 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d42993b1-8455-4142-bdec-aeabe96436cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d42993b1-8455-4142-bdec-aeabe96436cb\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.928161 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2zvf\" (UniqueName: \"kubernetes.io/projected/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-kube-api-access-h2zvf\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.928194 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.928222 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.928275 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.928336 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.928763 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.929493 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.930704 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.931558 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.935792 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.935820 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d42993b1-8455-4142-bdec-aeabe96436cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d42993b1-8455-4142-bdec-aeabe96436cb\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ecc16afcfb08d3e294674b5aabb1d2111b51ff7527b2043b17eabcde6ded2a05/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.938239 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.943443 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.960470 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2zvf\" (UniqueName: \"kubernetes.io/projected/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-kube-api-access-h2zvf\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.021614 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d42993b1-8455-4142-bdec-aeabe96436cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d42993b1-8455-4142-bdec-aeabe96436cb\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.085405 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.106088 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.107271 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.113982 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-6f59h" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.114046 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.114255 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.122090 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.134703 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-combined-ca-bundle\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.134749 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-config-data\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.134820 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-kolla-config\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.134865 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-memcached-tls-certs\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.134933 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5n54\" (UniqueName: \"kubernetes.io/projected/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-kube-api-access-x5n54\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.236664 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5n54\" (UniqueName: \"kubernetes.io/projected/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-kube-api-access-x5n54\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.236751 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-combined-ca-bundle\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.236774 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-config-data\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.236823 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-kolla-config\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.236891 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-memcached-tls-certs\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.239435 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-kolla-config\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.240023 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-config-data\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.259898 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-combined-ca-bundle\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.261605 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-memcached-tls-certs\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.295007 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5n54\" (UniqueName: \"kubernetes.io/projected/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-kube-api-access-x5n54\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.451555 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 30 21:35:54 crc kubenswrapper[4751]: I0130 21:35:54.129757 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:35:54 crc kubenswrapper[4751]: I0130 21:35:54.130096 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:35:54 crc kubenswrapper[4751]: I0130 21:35:54.130140 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:35:54 crc kubenswrapper[4751]: I0130 21:35:54.130894 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"589b659983c64eaeb9431668de4131b84f85d7d4aaf79c3e0b75a24b0812e09e"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:35:54 crc kubenswrapper[4751]: I0130 21:35:54.130943 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://589b659983c64eaeb9431668de4131b84f85d7d4aaf79c3e0b75a24b0812e09e" gracePeriod=600 Jan 30 21:35:54 crc kubenswrapper[4751]: I0130 21:35:54.701178 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="589b659983c64eaeb9431668de4131b84f85d7d4aaf79c3e0b75a24b0812e09e" exitCode=0 Jan 30 21:35:54 crc kubenswrapper[4751]: I0130 21:35:54.701235 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"589b659983c64eaeb9431668de4131b84f85d7d4aaf79c3e0b75a24b0812e09e"} Jan 30 21:35:54 crc kubenswrapper[4751]: I0130 21:35:54.701266 4751 scope.go:117] "RemoveContainer" containerID="4084bd2e19ec539ac0bc075f3b6a34007de80a7e632827590212d241d8cb0234" Jan 30 21:35:55 crc kubenswrapper[4751]: I0130 21:35:55.244685 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 21:35:55 crc kubenswrapper[4751]: I0130 21:35:55.247689 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 21:35:55 crc kubenswrapper[4751]: I0130 21:35:55.250838 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-qch6t" Jan 30 21:35:55 crc kubenswrapper[4751]: I0130 21:35:55.280407 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 21:35:55 crc kubenswrapper[4751]: I0130 21:35:55.312162 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5zjj\" (UniqueName: \"kubernetes.io/projected/67d207d6-2cd8-4679-919b-dedddeebd28d-kube-api-access-g5zjj\") pod \"kube-state-metrics-0\" (UID: \"67d207d6-2cd8-4679-919b-dedddeebd28d\") " pod="openstack/kube-state-metrics-0" Jan 30 21:35:55 crc kubenswrapper[4751]: I0130 21:35:55.415178 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5zjj\" (UniqueName: \"kubernetes.io/projected/67d207d6-2cd8-4679-919b-dedddeebd28d-kube-api-access-g5zjj\") pod \"kube-state-metrics-0\" (UID: \"67d207d6-2cd8-4679-919b-dedddeebd28d\") " pod="openstack/kube-state-metrics-0" Jan 30 21:35:55 crc kubenswrapper[4751]: I0130 21:35:55.445853 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5zjj\" (UniqueName: \"kubernetes.io/projected/67d207d6-2cd8-4679-919b-dedddeebd28d-kube-api-access-g5zjj\") pod \"kube-state-metrics-0\" (UID: \"67d207d6-2cd8-4679-919b-dedddeebd28d\") " pod="openstack/kube-state-metrics-0" Jan 30 21:35:55 crc kubenswrapper[4751]: I0130 21:35:55.607766 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.059305 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc"] Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.061032 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.063747 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.063798 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-vlpbg" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.082118 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc"] Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.130590 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fpm5\" (UniqueName: \"kubernetes.io/projected/0d7cf074-b623-45d0-ac84-c1e52a626885-kube-api-access-2fpm5\") pod \"observability-ui-dashboards-66cbf594b5-p97jc\" (UID: \"0d7cf074-b623-45d0-ac84-c1e52a626885\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.130636 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d7cf074-b623-45d0-ac84-c1e52a626885-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-p97jc\" (UID: \"0d7cf074-b623-45d0-ac84-c1e52a626885\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.231929 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fpm5\" (UniqueName: \"kubernetes.io/projected/0d7cf074-b623-45d0-ac84-c1e52a626885-kube-api-access-2fpm5\") pod \"observability-ui-dashboards-66cbf594b5-p97jc\" (UID: \"0d7cf074-b623-45d0-ac84-c1e52a626885\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.231988 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d7cf074-b623-45d0-ac84-c1e52a626885-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-p97jc\" (UID: \"0d7cf074-b623-45d0-ac84-c1e52a626885\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.252455 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d7cf074-b623-45d0-ac84-c1e52a626885-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-p97jc\" (UID: \"0d7cf074-b623-45d0-ac84-c1e52a626885\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.258944 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fpm5\" (UniqueName: \"kubernetes.io/projected/0d7cf074-b623-45d0-ac84-c1e52a626885-kube-api-access-2fpm5\") pod \"observability-ui-dashboards-66cbf594b5-p97jc\" (UID: \"0d7cf074-b623-45d0-ac84-c1e52a626885\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.420925 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.446281 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.453458 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.463362 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.463522 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-hjfsj" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.463607 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.463676 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.463731 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.463754 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.468102 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.475601 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.491791 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-567c7bd4b5-dnfxs"] Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.493152 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.576178 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.626662 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-567c7bd4b5-dnfxs"] Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647211 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d56430b1-227c-4074-8d43-86953ab9f911-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647292 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-console-config\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647487 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647538 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-console-serving-cert\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647605 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7297f1d7-6116-4005-9637-09e45a6844de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647676 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647701 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647736 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2g5w\" (UniqueName: 
\"kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-kube-api-access-b2g5w\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647771 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-service-ca\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647791 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647810 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647832 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-console-oauth-config\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647963 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-oauth-serving-cert\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.648006 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2njg6\" (UniqueName: \"kubernetes.io/projected/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-kube-api-access-2njg6\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.648483 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-trusted-ca-bundle\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.648534 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-config\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " 
pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.648556 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750559 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750616 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d56430b1-227c-4074-8d43-86953ab9f911-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750661 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-console-config\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750697 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750723 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-console-serving-cert\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750760 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7297f1d7-6116-4005-9637-09e45a6844de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750799 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750827 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750873 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2g5w\" (UniqueName: \"kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-kube-api-access-b2g5w\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750916 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-service-ca\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750943 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750966 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.751000 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-console-oauth-config\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.751028 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-oauth-serving-cert\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.751053 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2njg6\" (UniqueName: \"kubernetes.io/projected/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-kube-api-access-2njg6\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.751080 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-trusted-ca-bundle\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.751114 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-config\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.756990 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-config\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.757838 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.758550 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.758713 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-console-config\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.759128 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-service-ca\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.759399 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.762826 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-oauth-serving-cert\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.765176 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-console-oauth-config\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.765686 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.766781 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.767074 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-console-serving-cert\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.768357 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d56430b1-227c-4074-8d43-86953ab9f911-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.769856 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.769927 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-trusted-ca-bundle\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.772829 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2g5w\" (UniqueName: \"kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-kube-api-access-b2g5w\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.773170 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.773230 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7297f1d7-6116-4005-9637-09e45a6844de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a95dfeb2de129561acc13a0d8e1495cdeeea1e8a0c06c82206df350d4e35d0bf/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.797003 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2njg6\" (UniqueName: \"kubernetes.io/projected/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-kube-api-access-2njg6\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.833090 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7297f1d7-6116-4005-9637-09e45a6844de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.869799 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:57 crc kubenswrapper[4751]: I0130 21:35:57.078586 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.268892 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-g9s48"] Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.270354 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.272594 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.272791 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.273541 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-zzwbs" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.288719 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g9s48"] Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.333824 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-f4rx8"] Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.336180 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.350425 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-f4rx8"] Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.387350 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fbc382fd-1513-4137-b801-5627cc5886ea-var-run-ovn\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.387445 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbc382fd-1513-4137-b801-5627cc5886ea-scripts\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.387515 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbc382fd-1513-4137-b801-5627cc5886ea-var-run\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.387539 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fbc382fd-1513-4137-b801-5627cc5886ea-var-log-ovn\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.387566 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbc382fd-1513-4137-b801-5627cc5886ea-ovn-controller-tls-certs\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.387598 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc382fd-1513-4137-b801-5627cc5886ea-combined-ca-bundle\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.387645 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gr6x\" (UniqueName: \"kubernetes.io/projected/fbc382fd-1513-4137-b801-5627cc5886ea-kube-api-access-2gr6x\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489059 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbc382fd-1513-4137-b801-5627cc5886ea-var-run\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489114 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-var-run\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489146 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fbc382fd-1513-4137-b801-5627cc5886ea-var-log-ovn\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489177 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbc382fd-1513-4137-b801-5627cc5886ea-ovn-controller-tls-certs\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489212 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc382fd-1513-4137-b801-5627cc5886ea-combined-ca-bundle\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489266 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gr6x\" (UniqueName: \"kubernetes.io/projected/fbc382fd-1513-4137-b801-5627cc5886ea-kube-api-access-2gr6x\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489290 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/071bab49-34f0-4fef-849e-c2530b4c423c-scripts\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489343 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-etc-ovs\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489409 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-var-log\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489432 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpzxj\" (UniqueName: \"kubernetes.io/projected/071bab49-34f0-4fef-849e-c2530b4c423c-kube-api-access-vpzxj\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489468 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fbc382fd-1513-4137-b801-5627cc5886ea-var-run-ovn\") pod 
\"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489492 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-var-lib\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489613 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbc382fd-1513-4137-b801-5627cc5886ea-scripts\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489763 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbc382fd-1513-4137-b801-5627cc5886ea-var-run\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489977 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fbc382fd-1513-4137-b801-5627cc5886ea-var-log-ovn\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.491450 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fbc382fd-1513-4137-b801-5627cc5886ea-var-run-ovn\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.495028 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbc382fd-1513-4137-b801-5627cc5886ea-scripts\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.496095 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbc382fd-1513-4137-b801-5627cc5886ea-ovn-controller-tls-certs\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.496883 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc382fd-1513-4137-b801-5627cc5886ea-combined-ca-bundle\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.504581 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gr6x\" (UniqueName: \"kubernetes.io/projected/fbc382fd-1513-4137-b801-5627cc5886ea-kube-api-access-2gr6x\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.590785 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-run\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-var-run\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.590865 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/071bab49-34f0-4fef-849e-c2530b4c423c-scripts\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.590888 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-etc-ovs\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.590930 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-var-log\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.590945 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpzxj\" (UniqueName: \"kubernetes.io/projected/071bab49-34f0-4fef-849e-c2530b4c423c-kube-api-access-vpzxj\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.590967 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-var-lib\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.591234 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-var-lib\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.591292 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-var-run\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.591383 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-var-log\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.591474 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-etc-ovs\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:35:58 crc 
kubenswrapper[4751]: I0130 21:35:58.593385 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/071bab49-34f0-4fef-849e-c2530b4c423c-scripts\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.594864 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g9s48" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.626945 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpzxj\" (UniqueName: \"kubernetes.io/projected/071bab49-34f0-4fef-849e-c2530b4c423c-kube-api-access-vpzxj\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.654265 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:35:59 crc kubenswrapper[4751]: W0130 21:35:59.170654 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd55cd7e5_6799_4e1a_9f3b_a92937aca796.slice/crio-fcecc736431886bee24a53cf715abb5fcc3c8a76b85cf79ff3d524c5ded806ab WatchSource:0}: Error finding container fcecc736431886bee24a53cf715abb5fcc3c8a76b85cf79ff3d524c5ded806ab: Status 404 returned error can't find the container with id fcecc736431886bee24a53cf715abb5fcc3c8a76b85cf79ff3d524c5ded806ab Jan 30 21:35:59 crc kubenswrapper[4751]: I0130 21:35:59.772376 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d55cd7e5-6799-4e1a-9f3b-a92937aca796","Type":"ContainerStarted","Data":"fcecc736431886bee24a53cf715abb5fcc3c8a76b85cf79ff3d524c5ded806ab"} Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.592812 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.598439 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.601476 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-qmtjb" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.602358 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.602866 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.602994 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.603095 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.609029 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.632899 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8708be-4bf5-440d-a6e3-876acf844253-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.633029 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8708be-4bf5-440d-a6e3-876acf844253-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.633086 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f8708be-4bf5-440d-a6e3-876acf844253-config\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.633158 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1f8708be-4bf5-440d-a6e3-876acf844253-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.633253 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr4xt\" (UniqueName: \"kubernetes.io/projected/1f8708be-4bf5-440d-a6e3-876acf844253-kube-api-access-cr4xt\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.633290 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8b2a94e5-3eb6-4555-8d93-cc7723cb8e40\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8b2a94e5-3eb6-4555-8d93-cc7723cb8e40\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.633351 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f8708be-4bf5-440d-a6e3-876acf844253-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.633400 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8708be-4bf5-440d-a6e3-876acf844253-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.735002 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8708be-4bf5-440d-a6e3-876acf844253-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.735110 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8708be-4bf5-440d-a6e3-876acf844253-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.735153 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f8708be-4bf5-440d-a6e3-876acf844253-config\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.735209 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1f8708be-4bf5-440d-a6e3-876acf844253-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.735289 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr4xt\" (UniqueName: \"kubernetes.io/projected/1f8708be-4bf5-440d-a6e3-876acf844253-kube-api-access-cr4xt\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.735320 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8b2a94e5-3eb6-4555-8d93-cc7723cb8e40\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8b2a94e5-3eb6-4555-8d93-cc7723cb8e40\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.735382 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f8708be-4bf5-440d-a6e3-876acf844253-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.735420 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8708be-4bf5-440d-a6e3-876acf844253-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.737053 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1f8708be-4bf5-440d-a6e3-876acf844253-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.737364 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f8708be-4bf5-440d-a6e3-876acf844253-config\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.738089 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f8708be-4bf5-440d-a6e3-876acf844253-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.741823 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8708be-4bf5-440d-a6e3-876acf844253-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.749798 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.749846 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8b2a94e5-3eb6-4555-8d93-cc7723cb8e40\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8b2a94e5-3eb6-4555-8d93-cc7723cb8e40\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/417b1a1fd95e5f32699b4df2d9f46dae6df6c0c601710dee8734902dce1c54a9/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.753122 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8708be-4bf5-440d-a6e3-876acf844253-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.753201 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr4xt\" (UniqueName: \"kubernetes.io/projected/1f8708be-4bf5-440d-a6e3-876acf844253-kube-api-access-cr4xt\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.768729 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8708be-4bf5-440d-a6e3-876acf844253-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.781345 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 
21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.784459 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.790074 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-788lt" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.790354 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.792754 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.793188 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.793674 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.829116 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8b2a94e5-3eb6-4555-8d93-cc7723cb8e40\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8b2a94e5-3eb6-4555-8d93-cc7723cb8e40\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.936938 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.938185 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/47614a4a-f824-4eb4-9f46-bf1ab137d364-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.938253 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-baa0940c-eaeb-4e90-bbe7-803e984794dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baa0940c-eaeb-4e90-bbe7-803e984794dd\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.938278 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47614a4a-f824-4eb4-9f46-bf1ab137d364-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.938312 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47614a4a-f824-4eb4-9f46-bf1ab137d364-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.938361 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zwk7\" (UniqueName: \"kubernetes.io/projected/47614a4a-f824-4eb4-9f46-bf1ab137d364-kube-api-access-9zwk7\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " 
pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.938411 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47614a4a-f824-4eb4-9f46-bf1ab137d364-config\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.938431 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/47614a4a-f824-4eb4-9f46-bf1ab137d364-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.938471 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/47614a4a-f824-4eb4-9f46-bf1ab137d364-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.040230 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-baa0940c-eaeb-4e90-bbe7-803e984794dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baa0940c-eaeb-4e90-bbe7-803e984794dd\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.040643 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47614a4a-f824-4eb4-9f46-bf1ab137d364-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.040847 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47614a4a-f824-4eb4-9f46-bf1ab137d364-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.040906 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zwk7\" (UniqueName: \"kubernetes.io/projected/47614a4a-f824-4eb4-9f46-bf1ab137d364-kube-api-access-9zwk7\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.040982 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47614a4a-f824-4eb4-9f46-bf1ab137d364-config\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.041008 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/47614a4a-f824-4eb4-9f46-bf1ab137d364-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.041067 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/47614a4a-f824-4eb4-9f46-bf1ab137d364-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.041098 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/47614a4a-f824-4eb4-9f46-bf1ab137d364-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.041654 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/47614a4a-f824-4eb4-9f46-bf1ab137d364-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.042110 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47614a4a-f824-4eb4-9f46-bf1ab137d364-config\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.042181 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47614a4a-f824-4eb4-9f46-bf1ab137d364-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.043022 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.043051 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-baa0940c-eaeb-4e90-bbe7-803e984794dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baa0940c-eaeb-4e90-bbe7-803e984794dd\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3f316b7d9fa0f84292f0c71966aa685e799b7597ab7ed0c55bf4b1d203e6cb9d/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.045502 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/47614a4a-f824-4eb4-9f46-bf1ab137d364-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.045963 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47614a4a-f824-4eb4-9f46-bf1ab137d364-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.051362 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/47614a4a-f824-4eb4-9f46-bf1ab137d364-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.061252 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zwk7\" (UniqueName: \"kubernetes.io/projected/47614a4a-f824-4eb4-9f46-bf1ab137d364-kube-api-access-9zwk7\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.080791 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-baa0940c-eaeb-4e90-bbe7-803e984794dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baa0940c-eaeb-4e90-bbe7-803e984794dd\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.141006 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:10 crc kubenswrapper[4751]: E0130 21:36:10.484994 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 30 21:36:10 crc kubenswrapper[4751]: E0130 21:36:10.485533 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8rt94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(f18b5d57-5b05-4ef0-bae3-68938e094510): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:36:10 crc kubenswrapper[4751]: E0130 21:36:10.489466 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" 
podUID="f18b5d57-5b05-4ef0-bae3-68938e094510" Jan 30 21:36:10 crc kubenswrapper[4751]: E0130 21:36:10.581078 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 30 21:36:10 crc kubenswrapper[4751]: E0130 21:36:10.581468 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hw2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(61d75daf-41cb-4ab5-b849-c98080ca748b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:36:10 crc kubenswrapper[4751]: E0130 21:36:10.582830 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" 
podUID="61d75daf-41cb-4ab5-b849-c98080ca748b" Jan 30 21:36:10 crc kubenswrapper[4751]: E0130 21:36:10.691029 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 30 21:36:10 crc kubenswrapper[4751]: E0130 21:36:10.691198 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqvcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-1_openstack(2ed6288f-1f28-4189-a452-10ed3fa78c7f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:36:10 crc kubenswrapper[4751]: E0130 21:36:10.692430 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-1" 
podUID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" Jan 30 21:36:17 crc kubenswrapper[4751]: I0130 21:36:17.189855 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-567c7bd4b5-dnfxs"] Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.830887 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.831622 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvr5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-whdw4_openstack(d3d45f11-44b0-4b38-b308-c99c83e52e6b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.832838 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-whdw4" podUID="d3d45f11-44b0-4b38-b308-c99c83e52e6b" Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.833504 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 21:36:17 crc 
kubenswrapper[4751]: E0130 21:36:17.833609 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fgm5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-628lt_openstack(33135688-6f3e-426e-be2b-0e455d6736e6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.834879 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-628lt" podUID="33135688-6f3e-426e-be2b-0e455d6736e6" Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.923514 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.923905 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpc4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-5w7xw_openstack(ef9f34c1-a280-43a3-a78b-6a10c2972759): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.925725 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw" podUID="ef9f34c1-a280-43a3-a78b-6a10c2972759" Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.935846 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.935977 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmqld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-8tkr2_openstack(a28e93be-b42f-4075-9092-349b11c825bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.939979 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2" podUID="a28e93be-b42f-4075-9092-349b11c825bb" Jan 30 21:36:18 crc kubenswrapper[4751]: E0130 21:36:18.022527 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-628lt" podUID="33135688-6f3e-426e-be2b-0e455d6736e6" Jan 30 21:36:18 crc kubenswrapper[4751]: E0130 21:36:18.022806 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-whdw4" podUID="d3d45f11-44b0-4b38-b308-c99c83e52e6b" Jan 30 21:36:18 crc kubenswrapper[4751]: I0130 21:36:18.101513 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-567c7bd4b5-dnfxs" event={"ID":"3d38bf98-a3aa-46b1-ac58-309f83d20bbb","Type":"ContainerStarted","Data":"a537cc6d1a4f5b6006c4775c057a3275f3eb40ec990f4c8224cdc294744a4571"} Jan 30 21:36:18 crc kubenswrapper[4751]: I0130 21:36:18.994781 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-567c7bd4b5-dnfxs" event={"ID":"3d38bf98-a3aa-46b1-ac58-309f83d20bbb","Type":"ContainerStarted","Data":"82606cceefd5a69bcd58f56f2b7aa736bc30a7c8262640380b5522dd54432246"} Jan 30 21:36:19 crc kubenswrapper[4751]: 
I0130 21:36:19.000837 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d55cd7e5-6799-4e1a-9f3b-a92937aca796","Type":"ContainerStarted","Data":"b41336e87950b050089b8d0b576106edc16f6aafa733c3c6906a17f623e03fa0"} Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.005633 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88"} Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.006463 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g9s48"] Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.016883 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.024274 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.034172 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-567c7bd4b5-dnfxs" podStartSLOduration=23.03415532 podStartE2EDuration="23.03415532s" podCreationTimestamp="2026-01-30 21:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:19.020797223 +0000 UTC m=+1317.766619872" watchObservedRunningTime="2026-01-30 21:36:19.03415532 +0000 UTC m=+1317.779977969" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.099055 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.220276 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc"] Jan 30 21:36:19 crc kubenswrapper[4751]: W0130 21:36:19.224419 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d7cf074_b623_45d0_ac84_c1e52a626885.slice/crio-b51f1d2fc02ae142d535c4c594ec2589285cd615fe0c9a7575d7f39eff05eb02 WatchSource:0}: Error finding container b51f1d2fc02ae142d535c4c594ec2589285cd615fe0c9a7575d7f39eff05eb02: Status 404 returned error can't find the container with id b51f1d2fc02ae142d535c4c594ec2589285cd615fe0c9a7575d7f39eff05eb02 Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.386647 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.431818 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 21:36:19 crc kubenswrapper[4751]: W0130 21:36:19.590696 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd56430b1_227c_4074_8d43_86953ab9f911.slice/crio-14a262a32c578ab480de0003e92d828da04b4354e1d5c9b7efbfca95d406a828 WatchSource:0}: Error finding container 14a262a32c578ab480de0003e92d828da04b4354e1d5c9b7efbfca95d406a828: Status 404 returned error can't find the container with id 14a262a32c578ab480de0003e92d828da04b4354e1d5c9b7efbfca95d406a828 Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.731968 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.733831 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.739652 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw" Jan 30 21:36:19 crc kubenswrapper[4751]: W0130 21:36:19.800523 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f8708be_4bf5_440d_a6e3_876acf844253.slice/crio-2a1d79f1a4427b17d3048fce28839bbbeb15047fcb8e8523dd0b3a8f9060a86b WatchSource:0}: Error finding container 2a1d79f1a4427b17d3048fce28839bbbeb15047fcb8e8523dd0b3a8f9060a86b: Status 404 returned error can't find the container with id 2a1d79f1a4427b17d3048fce28839bbbeb15047fcb8e8523dd0b3a8f9060a86b Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.837118 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmqld\" (UniqueName: \"kubernetes.io/projected/a28e93be-b42f-4075-9092-349b11c825bb-kube-api-access-pmqld\") pod \"a28e93be-b42f-4075-9092-349b11c825bb\" (UID: \"a28e93be-b42f-4075-9092-349b11c825bb\") " Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.837189 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28e93be-b42f-4075-9092-349b11c825bb-config\") pod \"a28e93be-b42f-4075-9092-349b11c825bb\" (UID: \"a28e93be-b42f-4075-9092-349b11c825bb\") " Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.837274 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-config\") pod \"ef9f34c1-a280-43a3-a78b-6a10c2972759\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.837397 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpc4c\" (UniqueName: \"kubernetes.io/projected/ef9f34c1-a280-43a3-a78b-6a10c2972759-kube-api-access-bpc4c\") pod \"ef9f34c1-a280-43a3-a78b-6a10c2972759\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.837458 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-dns-svc\") pod \"ef9f34c1-a280-43a3-a78b-6a10c2972759\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.837767 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-config" (OuterVolumeSpecName: "config") pod "ef9f34c1-a280-43a3-a78b-6a10c2972759" (UID: "ef9f34c1-a280-43a3-a78b-6a10c2972759"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.837799 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a28e93be-b42f-4075-9092-349b11c825bb-config" (OuterVolumeSpecName: "config") pod "a28e93be-b42f-4075-9092-349b11c825bb" (UID: "a28e93be-b42f-4075-9092-349b11c825bb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.838082 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef9f34c1-a280-43a3-a78b-6a10c2972759" (UID: "ef9f34c1-a280-43a3-a78b-6a10c2972759"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.838288 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.838308 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28e93be-b42f-4075-9092-349b11c825bb-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.838317 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.884606 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a28e93be-b42f-4075-9092-349b11c825bb-kube-api-access-pmqld" (OuterVolumeSpecName: "kube-api-access-pmqld") pod "a28e93be-b42f-4075-9092-349b11c825bb" (UID: "a28e93be-b42f-4075-9092-349b11c825bb"). InnerVolumeSpecName "kube-api-access-pmqld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.884924 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef9f34c1-a280-43a3-a78b-6a10c2972759-kube-api-access-bpc4c" (OuterVolumeSpecName: "kube-api-access-bpc4c") pod "ef9f34c1-a280-43a3-a78b-6a10c2972759" (UID: "ef9f34c1-a280-43a3-a78b-6a10c2972759"). InnerVolumeSpecName "kube-api-access-bpc4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.939989 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpc4c\" (UniqueName: \"kubernetes.io/projected/ef9f34c1-a280-43a3-a78b-6a10c2972759-kube-api-access-bpc4c\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.940029 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmqld\" (UniqueName: \"kubernetes.io/projected/a28e93be-b42f-4075-9092-349b11c825bb-kube-api-access-pmqld\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.037691 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1f8708be-4bf5-440d-a6e3-876acf844253","Type":"ContainerStarted","Data":"2a1d79f1a4427b17d3048fce28839bbbeb15047fcb8e8523dd0b3a8f9060a86b"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.046288 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"14c5f0f0-6d85-4d60-9daa-7fa3b401a884","Type":"ContainerStarted","Data":"69f55323b26a6cdd15f489b8bab8fa2c94d373221ee4208573c2dc948afeb570"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.048080 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" event={"ID":"0d7cf074-b623-45d0-ac84-c1e52a626885","Type":"ContainerStarted","Data":"b51f1d2fc02ae142d535c4c594ec2589285cd615fe0c9a7575d7f39eff05eb02"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.049194 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g9s48" event={"ID":"fbc382fd-1513-4137-b801-5627cc5886ea","Type":"ContainerStarted","Data":"a6330356b450bf43fea6bed9b3391f5252032f25720121c5607581805f1db4ed"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.050495 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2" event={"ID":"a28e93be-b42f-4075-9092-349b11c825bb","Type":"ContainerDied","Data":"93642ef2b0df57668df0e7cd91eb49b59436760eab9b26ed0b040d59521f1d2c"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.050553 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2" Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.053185 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"61d75daf-41cb-4ab5-b849-c98080ca748b","Type":"ContainerStarted","Data":"cf3b264e8ec141124dc8cea806067e0197228587097f1a72076d1d5e3beee32f"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.056575 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f18b5d57-5b05-4ef0-bae3-68938e094510","Type":"ContainerStarted","Data":"754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.058102 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d56430b1-227c-4074-8d43-86953ab9f911","Type":"ContainerStarted","Data":"14a262a32c578ab480de0003e92d828da04b4354e1d5c9b7efbfca95d406a828"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.060764 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw" event={"ID":"ef9f34c1-a280-43a3-a78b-6a10c2972759","Type":"ContainerDied","Data":"db81d7e720a3f967946b93c7cdb416134679dd7e6418e2fc62f067e92c234fe4"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.060812 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw" Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.066543 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"2ed6288f-1f28-4189-a452-10ed3fa78c7f","Type":"ContainerStarted","Data":"dc43aef27eee6e5555871ea3e140a0c234f05afe3ded956404826b8a2999ed23"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.071616 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"67d207d6-2cd8-4679-919b-dedddeebd28d","Type":"ContainerStarted","Data":"3bb2d0d293bcca63ced4a6eec87e280101ac65a5555311aa13f1e064ca31af8e"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.092845 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32","Type":"ContainerStarted","Data":"579631c272df8c432bd7df8c2f2c3693effbf544fdbdee73f85ac0888ded0450"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.092994 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32","Type":"ContainerStarted","Data":"0e6eb1d3860c9b9375ea706f9f68de934a1efb81362d1bedbd70261ef0caab70"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.205157 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5w7xw"] Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.215573 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5w7xw"] Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.255381 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8tkr2"] Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.266158 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8tkr2"] Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.776711 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 21:36:20 
crc kubenswrapper[4751]: W0130 21:36:20.833501 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47614a4a_f824_4eb4_9f46_bf1ab137d364.slice/crio-44dc5bc2f5c75e22d2cba7e160c581d67e6761e56e074418b1bb50f482c20098 WatchSource:0}: Error finding container 44dc5bc2f5c75e22d2cba7e160c581d67e6761e56e074418b1bb50f482c20098: Status 404 returned error can't find the container with id 44dc5bc2f5c75e22d2cba7e160c581d67e6761e56e074418b1bb50f482c20098 Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.874290 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-f4rx8"] Jan 30 21:36:21 crc kubenswrapper[4751]: I0130 21:36:21.105257 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"47614a4a-f824-4eb4-9f46-bf1ab137d364","Type":"ContainerStarted","Data":"44dc5bc2f5c75e22d2cba7e160c581d67e6761e56e074418b1bb50f482c20098"} Jan 30 21:36:21 crc kubenswrapper[4751]: I0130 21:36:21.107051 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"192a5913-0c28-4214-9ac0-d37ca2eeb34c","Type":"ContainerStarted","Data":"6a2c138626ec1f6b7d91772998275ab4f054944271024ad8876c0420d7d4bbc9"} Jan 30 21:36:21 crc kubenswrapper[4751]: W0130 21:36:21.566559 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod071bab49_34f0_4fef_849e_c2530b4c423c.slice/crio-68b8075a80c57b4dc37cb7c0152a8a45188bad145d4bb1bfb58ae42e790178f3 WatchSource:0}: Error finding container 68b8075a80c57b4dc37cb7c0152a8a45188bad145d4bb1bfb58ae42e790178f3: Status 404 returned error can't find the container with id 68b8075a80c57b4dc37cb7c0152a8a45188bad145d4bb1bfb58ae42e790178f3 Jan 30 21:36:21 crc kubenswrapper[4751]: I0130 21:36:21.996491 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a28e93be-b42f-4075-9092-349b11c825bb" path="/var/lib/kubelet/pods/a28e93be-b42f-4075-9092-349b11c825bb/volumes" Jan 30 21:36:21 crc kubenswrapper[4751]: I0130 21:36:21.997343 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef9f34c1-a280-43a3-a78b-6a10c2972759" path="/var/lib/kubelet/pods/ef9f34c1-a280-43a3-a78b-6a10c2972759/volumes" Jan 30 21:36:22 crc kubenswrapper[4751]: I0130 21:36:22.125857 4751 generic.go:334] "Generic (PLEG): container finished" podID="d55cd7e5-6799-4e1a-9f3b-a92937aca796" containerID="b41336e87950b050089b8d0b576106edc16f6aafa733c3c6906a17f623e03fa0" exitCode=0 Jan 30 21:36:22 crc kubenswrapper[4751]: I0130 21:36:22.126016 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d55cd7e5-6799-4e1a-9f3b-a92937aca796","Type":"ContainerDied","Data":"b41336e87950b050089b8d0b576106edc16f6aafa733c3c6906a17f623e03fa0"} Jan 30 21:36:22 crc kubenswrapper[4751]: I0130 21:36:22.128429 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f4rx8" event={"ID":"071bab49-34f0-4fef-849e-c2530b4c423c","Type":"ContainerStarted","Data":"68b8075a80c57b4dc37cb7c0152a8a45188bad145d4bb1bfb58ae42e790178f3"} Jan 30 21:36:24 crc kubenswrapper[4751]: I0130 21:36:24.152419 4751 generic.go:334] "Generic (PLEG): container finished" podID="a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32" containerID="579631c272df8c432bd7df8c2f2c3693effbf544fdbdee73f85ac0888ded0450" exitCode=0 Jan 30 21:36:24 crc kubenswrapper[4751]: I0130 21:36:24.152685 4751 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32","Type":"ContainerDied","Data":"579631c272df8c432bd7df8c2f2c3693effbf544fdbdee73f85ac0888ded0450"} Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.171542 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" event={"ID":"0d7cf074-b623-45d0-ac84-c1e52a626885","Type":"ContainerStarted","Data":"567363d1a20e1743552dc1ae55168c90c044abbcc885a33caf9ff900c535d100"} Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.173110 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"14c5f0f0-6d85-4d60-9daa-7fa3b401a884","Type":"ContainerStarted","Data":"ef01dd13651cb8612a3cea1fc4418d761472f371dfd1a8a92ded40a74991eefb"} Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.173249 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.175364 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"67d207d6-2cd8-4679-919b-dedddeebd28d","Type":"ContainerStarted","Data":"c8daadd27b9052e4c383910cfe816522e7df6c5dba304b05d5d1d591c21b393e"} Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.175467 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.176977 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1f8708be-4bf5-440d-a6e3-876acf844253","Type":"ContainerStarted","Data":"7032f4ae7198894a0049096b3a54cbb586bf347b9dfaf23c0a7ed0644c1a5952"} Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.178987 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"47614a4a-f824-4eb4-9f46-bf1ab137d364","Type":"ContainerStarted","Data":"3d099065e961e6db517f1df053408df8047fc234a2320c57772d3fb26b65c47e"} Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.183838 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32","Type":"ContainerStarted","Data":"0678b0f1f04488f41d09d2691fbeb7d1630138970ed74140f21c85b66d911f15"} Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.188611 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d55cd7e5-6799-4e1a-9f3b-a92937aca796","Type":"ContainerStarted","Data":"3cbe6016f28e4a7af5409f18e256b5913f2a3067820734d5757c5766433f5586"} Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.203640 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" podStartSLOduration=25.767538459 podStartE2EDuration="31.203613859s" podCreationTimestamp="2026-01-30 21:35:55 +0000 UTC" firstStartedPulling="2026-01-30 21:36:19.227171661 +0000 UTC m=+1317.972994310" lastFinishedPulling="2026-01-30 21:36:24.663247061 +0000 UTC m=+1323.409069710" observedRunningTime="2026-01-30 21:36:26.192188454 +0000 UTC m=+1324.938011153" watchObservedRunningTime="2026-01-30 21:36:26.203613859 +0000 UTC m=+1324.949436508" Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.244957 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" 
podStartSLOduration=35.244931082 podStartE2EDuration="35.244931082s" podCreationTimestamp="2026-01-30 21:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:26.229689905 +0000 UTC m=+1324.975512574" watchObservedRunningTime="2026-01-30 21:36:26.244931082 +0000 UTC m=+1324.990753751" Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.283994 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=25.488186544 podStartE2EDuration="31.283977334s" podCreationTimestamp="2026-01-30 21:35:55 +0000 UTC" firstStartedPulling="2026-01-30 21:36:19.488242858 +0000 UTC m=+1318.234065507" lastFinishedPulling="2026-01-30 21:36:25.284033648 +0000 UTC m=+1324.029856297" observedRunningTime="2026-01-30 21:36:26.283612414 +0000 UTC m=+1325.029435063" watchObservedRunningTime="2026-01-30 21:36:26.283977334 +0000 UTC m=+1325.029799983" Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.284697 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=28.128779157 podStartE2EDuration="33.284684622s" podCreationTimestamp="2026-01-30 21:35:53 +0000 UTC" firstStartedPulling="2026-01-30 21:36:19.098692972 +0000 UTC m=+1317.844515621" lastFinishedPulling="2026-01-30 21:36:24.254598437 +0000 UTC m=+1323.000421086" observedRunningTime="2026-01-30 21:36:26.262214833 +0000 UTC m=+1325.008037482" watchObservedRunningTime="2026-01-30 21:36:26.284684622 +0000 UTC m=+1325.030507271" Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.326696 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=17.589310552 podStartE2EDuration="36.326672193s" podCreationTimestamp="2026-01-30 21:35:50 +0000 UTC" firstStartedPulling="2026-01-30 21:35:59.173312737 +0000 UTC m=+1297.919135386" lastFinishedPulling="2026-01-30 21:36:17.910674378 +0000 UTC m=+1316.656497027" observedRunningTime="2026-01-30 21:36:26.31378747 +0000 UTC m=+1325.059610119" watchObservedRunningTime="2026-01-30 21:36:26.326672193 +0000 UTC m=+1325.072494842" Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.870752 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.872189 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.878827 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:36:27 crc kubenswrapper[4751]: I0130 21:36:27.198205 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g9s48" event={"ID":"fbc382fd-1513-4137-b801-5627cc5886ea","Type":"ContainerStarted","Data":"bef48e968c54383f22b1749cb484521a899cbe28a538a43a6d252f3eb1f25a25"} Jan 30 21:36:27 crc kubenswrapper[4751]: I0130 21:36:27.198583 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-g9s48" Jan 30 21:36:27 crc kubenswrapper[4751]: I0130 21:36:27.200310 4751 generic.go:334] "Generic (PLEG): container finished" podID="071bab49-34f0-4fef-849e-c2530b4c423c" containerID="ffa2573ac7dc5b9bd71b4e537625e7c9a8c61dbd83a867ec7dfaee0cf9f5eb00" exitCode=0 Jan 30 21:36:27 crc 
kubenswrapper[4751]: I0130 21:36:27.200474 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f4rx8" event={"ID":"071bab49-34f0-4fef-849e-c2530b4c423c","Type":"ContainerDied","Data":"ffa2573ac7dc5b9bd71b4e537625e7c9a8c61dbd83a867ec7dfaee0cf9f5eb00"} Jan 30 21:36:27 crc kubenswrapper[4751]: I0130 21:36:27.205132 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:36:27 crc kubenswrapper[4751]: I0130 21:36:27.217858 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-g9s48" podStartSLOduration=23.147907578 podStartE2EDuration="29.217838935s" podCreationTimestamp="2026-01-30 21:35:58 +0000 UTC" firstStartedPulling="2026-01-30 21:36:19.099045391 +0000 UTC m=+1317.844868040" lastFinishedPulling="2026-01-30 21:36:25.168976748 +0000 UTC m=+1323.914799397" observedRunningTime="2026-01-30 21:36:27.215220935 +0000 UTC m=+1325.961043614" watchObservedRunningTime="2026-01-30 21:36:27.217838935 +0000 UTC m=+1325.963661584" Jan 30 21:36:27 crc kubenswrapper[4751]: I0130 21:36:27.284719 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b64b75d5d-kgc46"] Jan 30 21:36:28 crc kubenswrapper[4751]: I0130 21:36:28.209614 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1f8708be-4bf5-440d-a6e3-876acf844253","Type":"ContainerStarted","Data":"ce0bb679d4e618dfb33a5cd9fdea7bfd89b4de261b65a7141d120dab35183b8a"} Jan 30 21:36:28 crc kubenswrapper[4751]: I0130 21:36:28.213030 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"47614a4a-f824-4eb4-9f46-bf1ab137d364","Type":"ContainerStarted","Data":"285fed604b34fa9ce63507a50b19e78645ad9d9b08e00bc6030cd353a6955aa7"} Jan 30 21:36:28 crc kubenswrapper[4751]: I0130 21:36:28.216222 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f4rx8" event={"ID":"071bab49-34f0-4fef-849e-c2530b4c423c","Type":"ContainerStarted","Data":"718efc89487f0308768571832008d2392b357dc679fe87f9ac770fad001e8f1c"} Jan 30 21:36:28 crc kubenswrapper[4751]: I0130 21:36:28.240952 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=19.541837156 podStartE2EDuration="27.240936259s" podCreationTimestamp="2026-01-30 21:36:01 +0000 UTC" firstStartedPulling="2026-01-30 21:36:19.803747948 +0000 UTC m=+1318.549570597" lastFinishedPulling="2026-01-30 21:36:27.502847051 +0000 UTC m=+1326.248669700" observedRunningTime="2026-01-30 21:36:28.235405542 +0000 UTC m=+1326.981228241" watchObservedRunningTime="2026-01-30 21:36:28.240936259 +0000 UTC m=+1326.986758898" Jan 30 21:36:28 crc kubenswrapper[4751]: I0130 21:36:28.267482 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=20.603566559 podStartE2EDuration="27.267461057s" podCreationTimestamp="2026-01-30 21:36:01 +0000 UTC" firstStartedPulling="2026-01-30 21:36:20.835850281 +0000 UTC m=+1319.581672930" lastFinishedPulling="2026-01-30 21:36:27.499744769 +0000 UTC m=+1326.245567428" observedRunningTime="2026-01-30 21:36:28.260133141 +0000 UTC m=+1327.005955780" watchObservedRunningTime="2026-01-30 21:36:28.267461057 +0000 UTC m=+1327.013283706" Jan 30 21:36:28 crc kubenswrapper[4751]: E0130 21:36:28.932832 4751 upgradeaware.go:427] Error proxying data from client to backend: readfrom 
tcp 38.102.83.39:48648->38.102.83.39:41127: write tcp 38.102.83.39:48648->38.102.83.39:41127: write: broken pipe Jan 30 21:36:29 crc kubenswrapper[4751]: I0130 21:36:29.233512 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f4rx8" event={"ID":"071bab49-34f0-4fef-849e-c2530b4c423c","Type":"ContainerStarted","Data":"378619f77f6f87c9795145256b5000917b55722ce6e850bbc4ec0ff90843608b"} Jan 30 21:36:29 crc kubenswrapper[4751]: I0130 21:36:29.234303 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:36:29 crc kubenswrapper[4751]: I0130 21:36:29.234348 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:36:29 crc kubenswrapper[4751]: I0130 21:36:29.236923 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d56430b1-227c-4074-8d43-86953ab9f911","Type":"ContainerStarted","Data":"0e8380c6ff95a924287e8674599018ad6d281082245c17624c192e7eea73966f"} Jan 30 21:36:29 crc kubenswrapper[4751]: I0130 21:36:29.257912 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-f4rx8" podStartSLOduration=27.656064988 podStartE2EDuration="31.257895748s" podCreationTimestamp="2026-01-30 21:35:58 +0000 UTC" firstStartedPulling="2026-01-30 21:36:21.568678868 +0000 UTC m=+1320.314501517" lastFinishedPulling="2026-01-30 21:36:25.170509588 +0000 UTC m=+1323.916332277" observedRunningTime="2026-01-30 21:36:29.252214517 +0000 UTC m=+1327.998037166" watchObservedRunningTime="2026-01-30 21:36:29.257895748 +0000 UTC m=+1328.003718397" Jan 30 21:36:29 crc kubenswrapper[4751]: I0130 21:36:29.937888 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.010374 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.141894 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.208012 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.244695 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.244774 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.308759 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.635747 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-628lt"] Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.703335 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-gcttq"] Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.705164 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.714678 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.722834 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-gcttq"] Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.734896 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-6bddb"] Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.736306 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6bddb" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.740575 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.759515 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6bddb"] Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.784833 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-config\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.785092 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e60cf673-3513-4af6-ac72-280908e95405-ovs-rundir\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.785170 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e60cf673-3513-4af6-ac72-280908e95405-config\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.785279 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e60cf673-3513-4af6-ac72-280908e95405-ovn-rundir\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.785351 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e60cf673-3513-4af6-ac72-280908e95405-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.785465 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr2tf\" (UniqueName: \"kubernetes.io/projected/6f39785c-2919-4c29-8405-fd314710c587-kube-api-access-kr2tf\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" Jan 30 21:36:30 crc kubenswrapper[4751]: 
I0130 21:36:30.785515 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.785561 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e60cf673-3513-4af6-ac72-280908e95405-combined-ca-bundle\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.785592 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.785651 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cd44\" (UniqueName: \"kubernetes.io/projected/e60cf673-3513-4af6-ac72-280908e95405-kube-api-access-8cd44\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.871945 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-whdw4"] Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.889408 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.889457 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.889477 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e60cf673-3513-4af6-ac72-280908e95405-combined-ca-bundle\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.889504 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cd44\" (UniqueName: \"kubernetes.io/projected/e60cf673-3513-4af6-ac72-280908e95405-kube-api-access-8cd44\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.889538 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-config\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.889616 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e60cf673-3513-4af6-ac72-280908e95405-ovs-rundir\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.889640 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e60cf673-3513-4af6-ac72-280908e95405-config\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.889682 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e60cf673-3513-4af6-ac72-280908e95405-ovn-rundir\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.889706 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e60cf673-3513-4af6-ac72-280908e95405-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.889748 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr2tf\" (UniqueName: \"kubernetes.io/projected/6f39785c-2919-4c29-8405-fd314710c587-kube-api-access-kr2tf\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.890563 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e60cf673-3513-4af6-ac72-280908e95405-ovn-rundir\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.890639 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e60cf673-3513-4af6-ac72-280908e95405-ovs-rundir\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.890737 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e60cf673-3513-4af6-ac72-280908e95405-config\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.890768 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: 
\"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.890854 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-config\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.891283 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.896674 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e60cf673-3513-4af6-ac72-280908e95405-combined-ca-bundle\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.900919 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e60cf673-3513-4af6-ac72-280908e95405-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.924632 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-pl94b"] Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.926260 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.929529 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr2tf\" (UniqueName: \"kubernetes.io/projected/6f39785c-2919-4c29-8405-fd314710c587-kube-api-access-kr2tf\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.937633 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.939166 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cd44\" (UniqueName: \"kubernetes.io/projected/e60cf673-3513-4af6-ac72-280908e95405-kube-api-access-8cd44\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.968409 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-pl94b"] Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.993138 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.993228 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.993261 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-config\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.993303 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk2gf\" (UniqueName: \"kubernetes.io/projected/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-kube-api-access-hk2gf\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.993427 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-dns-svc\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.031631 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.061177 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-6bddb" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.095441 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.095514 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-config\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.095587 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk2gf\" (UniqueName: \"kubernetes.io/projected/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-kube-api-access-hk2gf\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.095610 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-dns-svc\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.095686 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.096685 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-config\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.097772 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-dns-svc\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.097794 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.097951 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 
21:36:31.125880 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk2gf\" (UniqueName: \"kubernetes.io/projected/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-kube-api-access-hk2gf\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.274981 4751 generic.go:334] "Generic (PLEG): container finished" podID="33135688-6f3e-426e-be2b-0e455d6736e6" containerID="9c47ecbe42a77dd1b41021da0eb6f61ea6d658a2d38393fa7a8f216c5d640c6d" exitCode=0 Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.278125 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-628lt" event={"ID":"33135688-6f3e-426e-be2b-0e455d6736e6","Type":"ContainerDied","Data":"9c47ecbe42a77dd1b41021da0eb6f61ea6d658a2d38393fa7a8f216c5d640c6d"} Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.294799 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.298529 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-whdw4" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.355614 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.420566 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-dns-svc\") pod \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") " Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.420688 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvr5f\" (UniqueName: \"kubernetes.io/projected/d3d45f11-44b0-4b38-b308-c99c83e52e6b-kube-api-access-gvr5f\") pod \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") " Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.420820 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-config\") pod \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") " Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.422337 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d3d45f11-44b0-4b38-b308-c99c83e52e6b" (UID: "d3d45f11-44b0-4b38-b308-c99c83e52e6b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.423675 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-config" (OuterVolumeSpecName: "config") pod "d3d45f11-44b0-4b38-b308-c99c83e52e6b" (UID: "d3d45f11-44b0-4b38-b308-c99c83e52e6b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.427510 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d45f11-44b0-4b38-b308-c99c83e52e6b-kube-api-access-gvr5f" (OuterVolumeSpecName: "kube-api-access-gvr5f") pod "d3d45f11-44b0-4b38-b308-c99c83e52e6b" (UID: "d3d45f11-44b0-4b38-b308-c99c83e52e6b"). InnerVolumeSpecName "kube-api-access-gvr5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.526845 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.526879 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvr5f\" (UniqueName: \"kubernetes.io/projected/d3d45f11-44b0-4b38-b308-c99c83e52e6b-kube-api-access-gvr5f\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.526892 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.559251 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.561040 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.564196 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-st2jm" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.564318 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.564530 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.564628 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.574122 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.628656 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31a7def-755f-49e8-bf97-7e155bcc5113-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.629005 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f31a7def-755f-49e8-bf97-7e155bcc5113-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.629063 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctwkz\" (UniqueName: \"kubernetes.io/projected/f31a7def-755f-49e8-bf97-7e155bcc5113-kube-api-access-ctwkz\") pod \"ovn-northd-0\" (UID: 
\"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.629107 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f31a7def-755f-49e8-bf97-7e155bcc5113-scripts\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.629169 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f31a7def-755f-49e8-bf97-7e155bcc5113-config\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.629213 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31a7def-755f-49e8-bf97-7e155bcc5113-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.629289 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31a7def-755f-49e8-bf97-7e155bcc5113-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.731131 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f31a7def-755f-49e8-bf97-7e155bcc5113-scripts\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.731188 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f31a7def-755f-49e8-bf97-7e155bcc5113-config\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.731207 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31a7def-755f-49e8-bf97-7e155bcc5113-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.731270 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31a7def-755f-49e8-bf97-7e155bcc5113-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.731292 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31a7def-755f-49e8-bf97-7e155bcc5113-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.731364 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/f31a7def-755f-49e8-bf97-7e155bcc5113-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.731400 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctwkz\" (UniqueName: \"kubernetes.io/projected/f31a7def-755f-49e8-bf97-7e155bcc5113-kube-api-access-ctwkz\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.731968 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f31a7def-755f-49e8-bf97-7e155bcc5113-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.732618 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f31a7def-755f-49e8-bf97-7e155bcc5113-scripts\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.732926 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f31a7def-755f-49e8-bf97-7e155bcc5113-config\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.736350 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31a7def-755f-49e8-bf97-7e155bcc5113-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.736430 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31a7def-755f-49e8-bf97-7e155bcc5113-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.737354 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31a7def-755f-49e8-bf97-7e155bcc5113-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.760066 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctwkz\" (UniqueName: \"kubernetes.io/projected/f31a7def-755f-49e8-bf97-7e155bcc5113-kube-api-access-ctwkz\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.764738 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.770282 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.875530 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 21:36:31 crc kubenswrapper[4751]: 
I0130 21:36:31.883687 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.072041 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-gcttq"] Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.074026 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.094085 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6bddb"] Jan 30 21:36:32 crc kubenswrapper[4751]: W0130 21:36:32.107759 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode60cf673_3513_4af6_ac72_280908e95405.slice/crio-f2b60fe877dbe4646eb60fc38c2dcbb637f0aa9c85e2ee1bd4066b815cddcc50 WatchSource:0}: Error finding container f2b60fe877dbe4646eb60fc38c2dcbb637f0aa9c85e2ee1bd4066b815cddcc50: Status 404 returned error can't find the container with id f2b60fe877dbe4646eb60fc38c2dcbb637f0aa9c85e2ee1bd4066b815cddcc50 Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.163033 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgm5m\" (UniqueName: \"kubernetes.io/projected/33135688-6f3e-426e-be2b-0e455d6736e6-kube-api-access-fgm5m\") pod \"33135688-6f3e-426e-be2b-0e455d6736e6\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") " Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.163379 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-dns-svc\") pod \"33135688-6f3e-426e-be2b-0e455d6736e6\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") " Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.163525 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-config\") pod \"33135688-6f3e-426e-be2b-0e455d6736e6\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") " Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.168126 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33135688-6f3e-426e-be2b-0e455d6736e6-kube-api-access-fgm5m" (OuterVolumeSpecName: "kube-api-access-fgm5m") pod "33135688-6f3e-426e-be2b-0e455d6736e6" (UID: "33135688-6f3e-426e-be2b-0e455d6736e6"). InnerVolumeSpecName "kube-api-access-fgm5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.200050 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-pl94b"] Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.201954 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-config" (OuterVolumeSpecName: "config") pod "33135688-6f3e-426e-be2b-0e455d6736e6" (UID: "33135688-6f3e-426e-be2b-0e455d6736e6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.209205 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "33135688-6f3e-426e-be2b-0e455d6736e6" (UID: "33135688-6f3e-426e-be2b-0e455d6736e6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:32 crc kubenswrapper[4751]: W0130 21:36:32.211053 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa6dba67_6d0e_4b49_b9dd_0905f6ffe809.slice/crio-5084059983d5e17529806552161f9ac2cf353d5b0f25b7a0a25c23ba8ae664c9 WatchSource:0}: Error finding container 5084059983d5e17529806552161f9ac2cf353d5b0f25b7a0a25c23ba8ae664c9: Status 404 returned error can't find the container with id 5084059983d5e17529806552161f9ac2cf353d5b0f25b7a0a25c23ba8ae664c9 Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.266814 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgm5m\" (UniqueName: \"kubernetes.io/projected/33135688-6f3e-426e-be2b-0e455d6736e6-kube-api-access-fgm5m\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.266844 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.266853 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.293162 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6bddb" event={"ID":"e60cf673-3513-4af6-ac72-280908e95405","Type":"ContainerStarted","Data":"f2b60fe877dbe4646eb60fc38c2dcbb637f0aa9c85e2ee1bd4066b815cddcc50"} Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.295701 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" event={"ID":"6f39785c-2919-4c29-8405-fd314710c587","Type":"ContainerStarted","Data":"ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605"} Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.295764 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" event={"ID":"6f39785c-2919-4c29-8405-fd314710c587","Type":"ContainerStarted","Data":"55b6cba15b1dde6f627ad66f8e7ca8bf6ccd049b3a97fa354a6cb717078364af"} Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.302117 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-whdw4" Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.302167 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-whdw4" event={"ID":"d3d45f11-44b0-4b38-b308-c99c83e52e6b","Type":"ContainerDied","Data":"3071dbc640f12657ce923f3e1023fb8d61a64a9e5353065a4040dc6a73df2531"} Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.304468 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-pl94b" event={"ID":"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809","Type":"ContainerStarted","Data":"5084059983d5e17529806552161f9ac2cf353d5b0f25b7a0a25c23ba8ae664c9"} Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.307148 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-628lt" event={"ID":"33135688-6f3e-426e-be2b-0e455d6736e6","Type":"ContainerDied","Data":"5b26c9f9622d5f37dabd6fb741797aade48188fd9c3b092f168e67b8d44a96db"} Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.307184 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.307188 4751 scope.go:117] "RemoveContainer" containerID="9c47ecbe42a77dd1b41021da0eb6f61ea6d658a2d38393fa7a8f216c5d640c6d" Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.433830 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-whdw4"] Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.444644 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-whdw4"] Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.473609 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-628lt"] Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.483163 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-628lt"] Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.494896 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.504241 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.047835 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-1da7-account-create-update-q9cg8"] Jan 30 21:36:33 crc kubenswrapper[4751]: E0130 21:36:33.048728 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33135688-6f3e-426e-be2b-0e455d6736e6" containerName="init" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.048747 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="33135688-6f3e-426e-be2b-0e455d6736e6" containerName="init" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.049028 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="33135688-6f3e-426e-be2b-0e455d6736e6" containerName="init" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.050004 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-1da7-account-create-update-q9cg8" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.053992 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.060146 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1da7-account-create-update-q9cg8"] Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.086358 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.086485 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.098175 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-hgg7b"] Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.100665 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hgg7b" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.115887 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-hgg7b"] Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.173033 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.195836 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hpll\" (UniqueName: \"kubernetes.io/projected/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-kube-api-access-9hpll\") pod \"keystone-db-create-hgg7b\" (UID: \"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2\") " pod="openstack/keystone-db-create-hgg7b" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.195923 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-operator-scripts\") pod \"keystone-db-create-hgg7b\" (UID: \"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2\") " pod="openstack/keystone-db-create-hgg7b" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.195958 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj7db\" (UniqueName: \"kubernetes.io/projected/722402dd-bf51-47a6-b20e-85aec93527d9-kube-api-access-zj7db\") pod \"keystone-1da7-account-create-update-q9cg8\" (UID: \"722402dd-bf51-47a6-b20e-85aec93527d9\") " pod="openstack/keystone-1da7-account-create-update-q9cg8" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.195981 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/722402dd-bf51-47a6-b20e-85aec93527d9-operator-scripts\") pod \"keystone-1da7-account-create-update-q9cg8\" (UID: \"722402dd-bf51-47a6-b20e-85aec93527d9\") " pod="openstack/keystone-1da7-account-create-update-q9cg8" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.298115 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj7db\" (UniqueName: \"kubernetes.io/projected/722402dd-bf51-47a6-b20e-85aec93527d9-kube-api-access-zj7db\") pod \"keystone-1da7-account-create-update-q9cg8\" (UID: \"722402dd-bf51-47a6-b20e-85aec93527d9\") " 
pod="openstack/keystone-1da7-account-create-update-q9cg8" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.298163 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/722402dd-bf51-47a6-b20e-85aec93527d9-operator-scripts\") pod \"keystone-1da7-account-create-update-q9cg8\" (UID: \"722402dd-bf51-47a6-b20e-85aec93527d9\") " pod="openstack/keystone-1da7-account-create-update-q9cg8" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.299010 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/722402dd-bf51-47a6-b20e-85aec93527d9-operator-scripts\") pod \"keystone-1da7-account-create-update-q9cg8\" (UID: \"722402dd-bf51-47a6-b20e-85aec93527d9\") " pod="openstack/keystone-1da7-account-create-update-q9cg8" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.299172 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hpll\" (UniqueName: \"kubernetes.io/projected/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-kube-api-access-9hpll\") pod \"keystone-db-create-hgg7b\" (UID: \"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2\") " pod="openstack/keystone-db-create-hgg7b" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.299468 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-operator-scripts\") pod \"keystone-db-create-hgg7b\" (UID: \"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2\") " pod="openstack/keystone-db-create-hgg7b" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.311058 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-operator-scripts\") pod \"keystone-db-create-hgg7b\" (UID: \"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2\") " pod="openstack/keystone-db-create-hgg7b" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.319795 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj7db\" (UniqueName: \"kubernetes.io/projected/722402dd-bf51-47a6-b20e-85aec93527d9-kube-api-access-zj7db\") pod \"keystone-1da7-account-create-update-q9cg8\" (UID: \"722402dd-bf51-47a6-b20e-85aec93527d9\") " pod="openstack/keystone-1da7-account-create-update-q9cg8" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.328757 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hpll\" (UniqueName: \"kubernetes.io/projected/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-kube-api-access-9hpll\") pod \"keystone-db-create-hgg7b\" (UID: \"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2\") " pod="openstack/keystone-db-create-hgg7b" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.330636 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-mxcnd"] Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.331786 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mxcnd" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.373237 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-1da7-account-create-update-q9cg8" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.374608 4751 generic.go:334] "Generic (PLEG): container finished" podID="aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" containerID="f5e405ed39cb57c7e634de9365462e74ee99a3051cc26eb21d0da11ce6b70e82" exitCode=0 Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.376041 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-mxcnd"] Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.376074 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-pl94b" event={"ID":"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809","Type":"ContainerDied","Data":"f5e405ed39cb57c7e634de9365462e74ee99a3051cc26eb21d0da11ce6b70e82"} Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.393222 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6bddb" event={"ID":"e60cf673-3513-4af6-ac72-280908e95405","Type":"ContainerStarted","Data":"dfa0cc5e0e1a00048e5540825072f7736fb9b3f30a105d5fc4fb8fda5077dfc3"} Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.399309 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f31a7def-755f-49e8-bf97-7e155bcc5113","Type":"ContainerStarted","Data":"83bf62a32f5d5bd7adb294af5aaaa53cab4f2669572bd0f972b3ecfa96d0be73"} Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.401486 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-operator-scripts\") pod \"placement-db-create-mxcnd\" (UID: \"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5\") " pod="openstack/placement-db-create-mxcnd" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.401678 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxklc\" (UniqueName: \"kubernetes.io/projected/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-kube-api-access-pxklc\") pod \"placement-db-create-mxcnd\" (UID: \"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5\") " pod="openstack/placement-db-create-mxcnd" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.406182 4751 generic.go:334] "Generic (PLEG): container finished" podID="6f39785c-2919-4c29-8405-fd314710c587" containerID="ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605" exitCode=0 Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.407987 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" event={"ID":"6f39785c-2919-4c29-8405-fd314710c587","Type":"ContainerDied","Data":"ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605"} Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.418421 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-hgg7b" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.466618 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.501951 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-fed1-account-create-update-ztdkt"] Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.503656 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxklc\" (UniqueName: \"kubernetes.io/projected/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-kube-api-access-pxklc\") pod \"placement-db-create-mxcnd\" (UID: \"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5\") " pod="openstack/placement-db-create-mxcnd" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.503697 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fed1-account-create-update-ztdkt" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.503952 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-operator-scripts\") pod \"placement-db-create-mxcnd\" (UID: \"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5\") " pod="openstack/placement-db-create-mxcnd" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.507009 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-operator-scripts\") pod \"placement-db-create-mxcnd\" (UID: \"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5\") " pod="openstack/placement-db-create-mxcnd" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.510429 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.521173 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-fed1-account-create-update-ztdkt"] Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.535855 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxklc\" (UniqueName: \"kubernetes.io/projected/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-kube-api-access-pxklc\") pod \"placement-db-create-mxcnd\" (UID: \"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5\") " pod="openstack/placement-db-create-mxcnd" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.557515 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.563407 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-6bddb" podStartSLOduration=3.563387487 podStartE2EDuration="3.563387487s" podCreationTimestamp="2026-01-30 21:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:33.409574673 +0000 UTC m=+1332.155397322" watchObservedRunningTime="2026-01-30 21:36:33.563387487 +0000 UTC m=+1332.309210136" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.605485 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1373e37-3653-4f5d-9978-9d1cca4e546b-operator-scripts\") pod 
\"placement-fed1-account-create-update-ztdkt\" (UID: \"d1373e37-3653-4f5d-9978-9d1cca4e546b\") " pod="openstack/placement-fed1-account-create-update-ztdkt" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.605559 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vdvs\" (UniqueName: \"kubernetes.io/projected/d1373e37-3653-4f5d-9978-9d1cca4e546b-kube-api-access-9vdvs\") pod \"placement-fed1-account-create-update-ztdkt\" (UID: \"d1373e37-3653-4f5d-9978-9d1cca4e546b\") " pod="openstack/placement-fed1-account-create-update-ztdkt" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.665207 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-7tt6b"] Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.666658 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7tt6b" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.683418 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7tt6b"] Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.707337 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x92m\" (UniqueName: \"kubernetes.io/projected/93341dcd-a293-4879-8baf-855556383780-kube-api-access-9x92m\") pod \"glance-db-create-7tt6b\" (UID: \"93341dcd-a293-4879-8baf-855556383780\") " pod="openstack/glance-db-create-7tt6b" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.707401 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93341dcd-a293-4879-8baf-855556383780-operator-scripts\") pod \"glance-db-create-7tt6b\" (UID: \"93341dcd-a293-4879-8baf-855556383780\") " pod="openstack/glance-db-create-7tt6b" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.707484 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1373e37-3653-4f5d-9978-9d1cca4e546b-operator-scripts\") pod \"placement-fed1-account-create-update-ztdkt\" (UID: \"d1373e37-3653-4f5d-9978-9d1cca4e546b\") " pod="openstack/placement-fed1-account-create-update-ztdkt" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.707530 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vdvs\" (UniqueName: \"kubernetes.io/projected/d1373e37-3653-4f5d-9978-9d1cca4e546b-kube-api-access-9vdvs\") pod \"placement-fed1-account-create-update-ztdkt\" (UID: \"d1373e37-3653-4f5d-9978-9d1cca4e546b\") " pod="openstack/placement-fed1-account-create-update-ztdkt" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.708739 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1373e37-3653-4f5d-9978-9d1cca4e546b-operator-scripts\") pod \"placement-fed1-account-create-update-ztdkt\" (UID: \"d1373e37-3653-4f5d-9978-9d1cca4e546b\") " pod="openstack/placement-fed1-account-create-update-ztdkt" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.723072 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vdvs\" (UniqueName: \"kubernetes.io/projected/d1373e37-3653-4f5d-9978-9d1cca4e546b-kube-api-access-9vdvs\") pod \"placement-fed1-account-create-update-ztdkt\" (UID: \"d1373e37-3653-4f5d-9978-9d1cca4e546b\") " 
pod="openstack/placement-fed1-account-create-update-ztdkt" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.766351 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a004-account-create-update-zkpzg"] Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.767795 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a004-account-create-update-zkpzg" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.777796 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.779590 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a004-account-create-update-zkpzg"] Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.805952 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mxcnd" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.809608 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37f348fb-7f83-40db-98b2-7e8bc603a3e6-operator-scripts\") pod \"glance-a004-account-create-update-zkpzg\" (UID: \"37f348fb-7f83-40db-98b2-7e8bc603a3e6\") " pod="openstack/glance-a004-account-create-update-zkpzg" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.809726 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mphdl\" (UniqueName: \"kubernetes.io/projected/37f348fb-7f83-40db-98b2-7e8bc603a3e6-kube-api-access-mphdl\") pod \"glance-a004-account-create-update-zkpzg\" (UID: \"37f348fb-7f83-40db-98b2-7e8bc603a3e6\") " pod="openstack/glance-a004-account-create-update-zkpzg" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.809864 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x92m\" (UniqueName: \"kubernetes.io/projected/93341dcd-a293-4879-8baf-855556383780-kube-api-access-9x92m\") pod \"glance-db-create-7tt6b\" (UID: \"93341dcd-a293-4879-8baf-855556383780\") " pod="openstack/glance-db-create-7tt6b" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.809955 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93341dcd-a293-4879-8baf-855556383780-operator-scripts\") pod \"glance-db-create-7tt6b\" (UID: \"93341dcd-a293-4879-8baf-855556383780\") " pod="openstack/glance-db-create-7tt6b" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.810867 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93341dcd-a293-4879-8baf-855556383780-operator-scripts\") pod \"glance-db-create-7tt6b\" (UID: \"93341dcd-a293-4879-8baf-855556383780\") " pod="openstack/glance-db-create-7tt6b" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.827855 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x92m\" (UniqueName: \"kubernetes.io/projected/93341dcd-a293-4879-8baf-855556383780-kube-api-access-9x92m\") pod \"glance-db-create-7tt6b\" (UID: \"93341dcd-a293-4879-8baf-855556383780\") " pod="openstack/glance-db-create-7tt6b" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.892379 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-fed1-account-create-update-ztdkt" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.911501 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37f348fb-7f83-40db-98b2-7e8bc603a3e6-operator-scripts\") pod \"glance-a004-account-create-update-zkpzg\" (UID: \"37f348fb-7f83-40db-98b2-7e8bc603a3e6\") " pod="openstack/glance-a004-account-create-update-zkpzg" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.911550 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mphdl\" (UniqueName: \"kubernetes.io/projected/37f348fb-7f83-40db-98b2-7e8bc603a3e6-kube-api-access-mphdl\") pod \"glance-a004-account-create-update-zkpzg\" (UID: \"37f348fb-7f83-40db-98b2-7e8bc603a3e6\") " pod="openstack/glance-a004-account-create-update-zkpzg" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.912316 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37f348fb-7f83-40db-98b2-7e8bc603a3e6-operator-scripts\") pod \"glance-a004-account-create-update-zkpzg\" (UID: \"37f348fb-7f83-40db-98b2-7e8bc603a3e6\") " pod="openstack/glance-a004-account-create-update-zkpzg" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.925753 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mphdl\" (UniqueName: \"kubernetes.io/projected/37f348fb-7f83-40db-98b2-7e8bc603a3e6-kube-api-access-mphdl\") pod \"glance-a004-account-create-update-zkpzg\" (UID: \"37f348fb-7f83-40db-98b2-7e8bc603a3e6\") " pod="openstack/glance-a004-account-create-update-zkpzg" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.985532 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7tt6b" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.985651 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33135688-6f3e-426e-be2b-0e455d6736e6" path="/var/lib/kubelet/pods/33135688-6f3e-426e-be2b-0e455d6736e6/volumes" Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.986180 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3d45f11-44b0-4b38-b308-c99c83e52e6b" path="/var/lib/kubelet/pods/d3d45f11-44b0-4b38-b308-c99c83e52e6b/volumes" Jan 30 21:36:34 crc kubenswrapper[4751]: I0130 21:36:34.101353 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a004-account-create-update-zkpzg" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.441921 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-p9lfn"] Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.443570 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.480309 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-p9lfn"] Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.560463 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phkvp\" (UniqueName: \"kubernetes.io/projected/4fbc6a33-d240-4982-ade1-668f5da8b516-kube-api-access-phkvp\") pod \"mysqld-exporter-openstack-db-create-p9lfn\" (UID: \"4fbc6a33-d240-4982-ade1-668f5da8b516\") " pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.560614 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fbc6a33-d240-4982-ade1-668f5da8b516-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-p9lfn\" (UID: \"4fbc6a33-d240-4982-ade1-668f5da8b516\") " pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.577690 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-gcttq"] Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.618657 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4dbml"] Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.620600 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.649590 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.649647 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4dbml"] Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.662826 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phkvp\" (UniqueName: \"kubernetes.io/projected/4fbc6a33-d240-4982-ade1-668f5da8b516-kube-api-access-phkvp\") pod \"mysqld-exporter-openstack-db-create-p9lfn\" (UID: \"4fbc6a33-d240-4982-ade1-668f5da8b516\") " pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.662922 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fbc6a33-d240-4982-ade1-668f5da8b516-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-p9lfn\" (UID: \"4fbc6a33-d240-4982-ade1-668f5da8b516\") " pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.664312 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fbc6a33-d240-4982-ade1-668f5da8b516-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-p9lfn\" (UID: \"4fbc6a33-d240-4982-ade1-668f5da8b516\") " pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.695465 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-dd31-account-create-update-4hlqb"] Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.696898 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.701465 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.709913 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phkvp\" (UniqueName: \"kubernetes.io/projected/4fbc6a33-d240-4982-ade1-668f5da8b516-kube-api-access-phkvp\") pod \"mysqld-exporter-openstack-db-create-p9lfn\" (UID: \"4fbc6a33-d240-4982-ade1-668f5da8b516\") " pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.766462 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-config\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.766546 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58ct9\" (UniqueName: \"kubernetes.io/projected/eb683b6d-9110-46e1-8406-eea86d9cc73b-kube-api-access-58ct9\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.766596 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.766642 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.766690 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.775394 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-dd31-account-create-update-4hlqb"] Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.787583 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.870129 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.870230 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55099194-6cb2-437d-ae0d-a08c104de380-operator-scripts\") pod \"mysqld-exporter-dd31-account-create-update-4hlqb\" (UID: \"55099194-6cb2-437d-ae0d-a08c104de380\") " pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.870293 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.870492 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-config\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.870538 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t95l\" (UniqueName: \"kubernetes.io/projected/55099194-6cb2-437d-ae0d-a08c104de380-kube-api-access-6t95l\") pod \"mysqld-exporter-dd31-account-create-update-4hlqb\" (UID: \"55099194-6cb2-437d-ae0d-a08c104de380\") " pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.870608 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58ct9\" (UniqueName: \"kubernetes.io/projected/eb683b6d-9110-46e1-8406-eea86d9cc73b-kube-api-access-58ct9\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.870722 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.871357 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.871747 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-nb\") pod 
\"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.871754 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-config\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.872134 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.910128 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58ct9\" (UniqueName: \"kubernetes.io/projected/eb683b6d-9110-46e1-8406-eea86d9cc73b-kube-api-access-58ct9\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.971783 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.979364 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t95l\" (UniqueName: \"kubernetes.io/projected/55099194-6cb2-437d-ae0d-a08c104de380-kube-api-access-6t95l\") pod \"mysqld-exporter-dd31-account-create-update-4hlqb\" (UID: \"55099194-6cb2-437d-ae0d-a08c104de380\") " pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.979571 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55099194-6cb2-437d-ae0d-a08c104de380-operator-scripts\") pod \"mysqld-exporter-dd31-account-create-update-4hlqb\" (UID: \"55099194-6cb2-437d-ae0d-a08c104de380\") " pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.980274 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55099194-6cb2-437d-ae0d-a08c104de380-operator-scripts\") pod \"mysqld-exporter-dd31-account-create-update-4hlqb\" (UID: \"55099194-6cb2-437d-ae0d-a08c104de380\") " pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.998912 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t95l\" (UniqueName: \"kubernetes.io/projected/55099194-6cb2-437d-ae0d-a08c104de380-kube-api-access-6t95l\") pod \"mysqld-exporter-dd31-account-create-update-4hlqb\" (UID: \"55099194-6cb2-437d-ae0d-a08c104de380\") " pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.084007 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.647121 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.653666 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.655864 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.655877 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.656169 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-2gfcw" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.656635 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.687787 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.735925 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-nkwvf"] Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.742191 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.747640 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.747847 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.748070 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.770526 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-nkwvf"] Jan 30 21:36:36 crc kubenswrapper[4751]: E0130 21:36:36.771298 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-4mznf ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-4mznf ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-nkwvf" podUID="91b9a8dc-b59e-4e4c-832b-494faad41261" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.779451 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-vvq25"] Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.780801 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.795647 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.795756 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-cache\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.795798 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q9dw\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-kube-api-access-2q9dw\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.795882 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-15aa0dd8-3a54-4cb5-aa28-c1ef970c7d80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15aa0dd8-3a54-4cb5-aa28-c1ef970c7d80\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.795912 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.795971 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-lock\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.802687 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vvq25"] Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.831386 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-nkwvf"] Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.898461 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.898512 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-swiftconf\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.898546 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlmhc\" (UniqueName: \"kubernetes.io/projected/70af95fb-5ca8-4482-a1bc-81b1891e0da7-kube-api-access-mlmhc\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.898569 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-swiftconf\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.898598 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-scripts\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:36 crc kubenswrapper[4751]: E0130 21:36:36.898654 4751 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 21:36:36 crc kubenswrapper[4751]: E0130 21:36:36.898677 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 21:36:36 crc kubenswrapper[4751]: E0130 21:36:36.898722 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift podName:4f6a1442-f7f7-499a-a7d5-c354d76ba9d5 nodeName:}" failed. No retries permitted until 2026-01-30 21:36:37.398704996 +0000 UTC m=+1336.144527645 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift") pod "swift-storage-0" (UID: "4f6a1442-f7f7-499a-a7d5-c354d76ba9d5") : configmap "swift-ring-files" not found Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.898748 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mznf\" (UniqueName: \"kubernetes.io/projected/91b9a8dc-b59e-4e4c-832b-494faad41261-kube-api-access-4mznf\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.898918 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-cache\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.898972 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-dispersionconf\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899027 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q9dw\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-kube-api-access-2q9dw\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899167 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-combined-ca-bundle\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899236 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-15aa0dd8-3a54-4cb5-aa28-c1ef970c7d80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15aa0dd8-3a54-4cb5-aa28-c1ef970c7d80\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899272 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899383 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/91b9a8dc-b59e-4e4c-832b-494faad41261-etc-swift\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899449 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-scripts\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899504 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-cache\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899530 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-lock\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899569 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/70af95fb-5ca8-4482-a1bc-81b1891e0da7-etc-swift\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899615 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-combined-ca-bundle\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899732 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-ring-data-devices\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899766 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-dispersionconf\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899791 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-ring-data-devices\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899805 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-lock\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.905675 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.905703 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-15aa0dd8-3a54-4cb5-aa28-c1ef970c7d80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15aa0dd8-3a54-4cb5-aa28-c1ef970c7d80\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8ae021dec1593a0996c8ad3e7a0be16c58e24389d91041271a46023fface37c6/globalmount\"" pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.907754 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.917522 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q9dw\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-kube-api-access-2q9dw\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.960847 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-15aa0dd8-3a54-4cb5-aa28-c1ef970c7d80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15aa0dd8-3a54-4cb5-aa28-c1ef970c7d80\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001300 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-swiftconf\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001363 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlmhc\" (UniqueName: \"kubernetes.io/projected/70af95fb-5ca8-4482-a1bc-81b1891e0da7-kube-api-access-mlmhc\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001383 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-swiftconf\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001414 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-scripts\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001450 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mznf\" (UniqueName: \"kubernetes.io/projected/91b9a8dc-b59e-4e4c-832b-494faad41261-kube-api-access-4mznf\") pod \"swift-ring-rebalance-nkwvf\" (UID: 
\"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001486 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-dispersionconf\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001543 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-combined-ca-bundle\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001576 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/91b9a8dc-b59e-4e4c-832b-494faad41261-etc-swift\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001601 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-scripts\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001634 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/70af95fb-5ca8-4482-a1bc-81b1891e0da7-etc-swift\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001658 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-combined-ca-bundle\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001692 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-ring-data-devices\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001709 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-dispersionconf\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001728 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-ring-data-devices\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc 
kubenswrapper[4751]: I0130 21:36:37.002569 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-ring-data-devices\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.002949 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/91b9a8dc-b59e-4e4c-832b-494faad41261-etc-swift\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.003477 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-scripts\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.003729 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/70af95fb-5ca8-4482-a1bc-81b1891e0da7-etc-swift\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.005011 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-ring-data-devices\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.005344 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-scripts\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.007086 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-dispersionconf\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.007414 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-combined-ca-bundle\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.007494 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-combined-ca-bundle\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.007564 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-dispersionconf\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.008127 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-swiftconf\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.010655 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-swiftconf\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.019519 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mznf\" (UniqueName: \"kubernetes.io/projected/91b9a8dc-b59e-4e4c-832b-494faad41261-kube-api-access-4mznf\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.027542 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlmhc\" (UniqueName: \"kubernetes.io/projected/70af95fb-5ca8-4482-a1bc-81b1891e0da7-kube-api-access-mlmhc\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.103147 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.409678 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:37 crc kubenswrapper[4751]: E0130 21:36:37.409902 4751 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 21:36:37 crc kubenswrapper[4751]: E0130 21:36:37.409937 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 21:36:37 crc kubenswrapper[4751]: E0130 21:36:37.410007 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift podName:4f6a1442-f7f7-499a-a7d5-c354d76ba9d5 nodeName:}" failed. No retries permitted until 2026-01-30 21:36:38.409984721 +0000 UTC m=+1337.155807380 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift") pod "swift-storage-0" (UID: "4f6a1442-f7f7-499a-a7d5-c354d76ba9d5") : configmap "swift-ring-files" not found Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.443694 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.466613 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.613521 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-combined-ca-bundle\") pod \"91b9a8dc-b59e-4e4c-832b-494faad41261\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.613839 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-swiftconf\") pod \"91b9a8dc-b59e-4e4c-832b-494faad41261\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.613960 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-ring-data-devices\") pod \"91b9a8dc-b59e-4e4c-832b-494faad41261\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.614255 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-dispersionconf\") pod \"91b9a8dc-b59e-4e4c-832b-494faad41261\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.614701 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-scripts\") pod \"91b9a8dc-b59e-4e4c-832b-494faad41261\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.615436 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mznf\" (UniqueName: \"kubernetes.io/projected/91b9a8dc-b59e-4e4c-832b-494faad41261-kube-api-access-4mznf\") pod \"91b9a8dc-b59e-4e4c-832b-494faad41261\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.615929 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/91b9a8dc-b59e-4e4c-832b-494faad41261-etc-swift\") pod \"91b9a8dc-b59e-4e4c-832b-494faad41261\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.614340 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "91b9a8dc-b59e-4e4c-832b-494faad41261" (UID: "91b9a8dc-b59e-4e4c-832b-494faad41261"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.615370 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-scripts" (OuterVolumeSpecName: "scripts") pod "91b9a8dc-b59e-4e4c-832b-494faad41261" (UID: "91b9a8dc-b59e-4e4c-832b-494faad41261"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.616473 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91b9a8dc-b59e-4e4c-832b-494faad41261-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "91b9a8dc-b59e-4e4c-832b-494faad41261" (UID: "91b9a8dc-b59e-4e4c-832b-494faad41261"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.617471 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.617574 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "91b9a8dc-b59e-4e4c-832b-494faad41261" (UID: "91b9a8dc-b59e-4e4c-832b-494faad41261"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.617585 4751 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/91b9a8dc-b59e-4e4c-832b-494faad41261-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.617628 4751 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.617634 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91b9a8dc-b59e-4e4c-832b-494faad41261" (UID: "91b9a8dc-b59e-4e4c-832b-494faad41261"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.619880 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "91b9a8dc-b59e-4e4c-832b-494faad41261" (UID: "91b9a8dc-b59e-4e4c-832b-494faad41261"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.627480 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91b9a8dc-b59e-4e4c-832b-494faad41261-kube-api-access-4mznf" (OuterVolumeSpecName: "kube-api-access-4mznf") pod "91b9a8dc-b59e-4e4c-832b-494faad41261" (UID: "91b9a8dc-b59e-4e4c-832b-494faad41261"). InnerVolumeSpecName "kube-api-access-4mznf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.719458 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.719492 4751 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.719501 4751 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.719509 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mznf\" (UniqueName: \"kubernetes.io/projected/91b9a8dc-b59e-4e4c-832b-494faad41261-kube-api-access-4mznf\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:38 crc kubenswrapper[4751]: I0130 21:36:38.434823 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:38 crc kubenswrapper[4751]: E0130 21:36:38.435253 4751 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 21:36:38 crc kubenswrapper[4751]: E0130 21:36:38.435409 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 21:36:38 crc kubenswrapper[4751]: E0130 21:36:38.435462 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift podName:4f6a1442-f7f7-499a-a7d5-c354d76ba9d5 nodeName:}" failed. No retries permitted until 2026-01-30 21:36:40.435445457 +0000 UTC m=+1339.181268116 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift") pod "swift-storage-0" (UID: "4f6a1442-f7f7-499a-a7d5-c354d76ba9d5") : configmap "swift-ring-files" not found Jan 30 21:36:38 crc kubenswrapper[4751]: I0130 21:36:38.453145 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nkwvf"
Jan 30 21:36:38 crc kubenswrapper[4751]: I0130 21:36:38.501127 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-nkwvf"]
Jan 30 21:36:38 crc kubenswrapper[4751]: I0130 21:36:38.509737 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-nkwvf"]
Jan 30 21:36:39 crc kubenswrapper[4751]: I0130 21:36:39.466005 4751 generic.go:334] "Generic (PLEG): container finished" podID="d56430b1-227c-4074-8d43-86953ab9f911" containerID="0e8380c6ff95a924287e8674599018ad6d281082245c17624c192e7eea73966f" exitCode=0
Jan 30 21:36:39 crc kubenswrapper[4751]: I0130 21:36:39.466122 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d56430b1-227c-4074-8d43-86953ab9f911","Type":"ContainerDied","Data":"0e8380c6ff95a924287e8674599018ad6d281082245c17624c192e7eea73966f"}
Jan 30 21:36:39 crc kubenswrapper[4751]: I0130 21:36:39.965080 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-mxcnd"]
Jan 30 21:36:39 crc kubenswrapper[4751]: I0130 21:36:39.997948 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91b9a8dc-b59e-4e4c-832b-494faad41261" path="/var/lib/kubelet/pods/91b9a8dc-b59e-4e4c-832b-494faad41261/volumes"
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.367632 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-8xlxv"]
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.373724 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8xlxv"
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.375731 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.387504 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8xlxv"]
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.478945 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-pl94b" event={"ID":"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809","Type":"ContainerStarted","Data":"6cd06f2bb56b148e8bf2fd2524c5d527d970ea6c6b7ba394cc56edcda374faf1"}
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.480381 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-pl94b"
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.487150 4751 generic.go:334] "Generic (PLEG): container finished" podID="ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5" containerID="3d408c3254750e92d426d8cded49880995124a210d5a1b2ed7f46112cc91e938" exitCode=0
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.487270 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mxcnd" event={"ID":"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5","Type":"ContainerDied","Data":"3d408c3254750e92d426d8cded49880995124a210d5a1b2ed7f46112cc91e938"}
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.487291 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mxcnd" event={"ID":"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5","Type":"ContainerStarted","Data":"80ee5941bdfee40d36adda8b22fe45b35867b32b4e232e517bcd95751ece2d05"}
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.489070 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f31a7def-755f-49e8-bf97-7e155bcc5113","Type":"ContainerStarted","Data":"04a156d706153c93639a688901d350f0328c9c9d4da2ad561e9e390f6d44d74b"}
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.489170 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f31a7def-755f-49e8-bf97-7e155bcc5113","Type":"ContainerStarted","Data":"cae3385f7099b9769debf5ffa6a014862e4a10fa087c1d2c65a218585f72a8f6"}
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.491187 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.495117 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0"
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.495204 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glgbl\" (UniqueName: \"kubernetes.io/projected/7be55860-0016-49cf-9505-9692dd9ccd36-kube-api-access-glgbl\") pod \"root-account-create-update-8xlxv\" (UID: \"7be55860-0016-49cf-9505-9692dd9ccd36\") " pod="openstack/root-account-create-update-8xlxv"
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.495372 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7be55860-0016-49cf-9505-9692dd9ccd36-operator-scripts\") pod \"root-account-create-update-8xlxv\" (UID: \"7be55860-0016-49cf-9505-9692dd9ccd36\") " pod="openstack/root-account-create-update-8xlxv"
Jan 30 21:36:40 crc kubenswrapper[4751]: E0130 21:36:40.495586 4751 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 30 21:36:40 crc kubenswrapper[4751]: E0130 21:36:40.495612 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 30 21:36:40 crc kubenswrapper[4751]: E0130 21:36:40.495673 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift podName:4f6a1442-f7f7-499a-a7d5-c354d76ba9d5 nodeName:}" failed. No retries permitted until 2026-01-30 21:36:44.495659077 +0000 UTC m=+1343.241481726 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift") pod "swift-storage-0" (UID: "4f6a1442-f7f7-499a-a7d5-c354d76ba9d5") : configmap "swift-ring-files" not found
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.497524 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" event={"ID":"6f39785c-2919-4c29-8405-fd314710c587","Type":"ContainerStarted","Data":"b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd"}
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.497666 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" podUID="6f39785c-2919-4c29-8405-fd314710c587" containerName="dnsmasq-dns" containerID="cri-o://b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd" gracePeriod=10
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.497760 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq"
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.526078 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-pl94b" podStartSLOduration=10.526058919 podStartE2EDuration="10.526058919s" podCreationTimestamp="2026-01-30 21:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:40.501526694 +0000 UTC m=+1339.247349343" watchObservedRunningTime="2026-01-30 21:36:40.526058919 +0000 UTC m=+1339.271881568"
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.558071 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.42751228 podStartE2EDuration="9.558048342s" podCreationTimestamp="2026-01-30 21:36:31 +0000 UTC" firstStartedPulling="2026-01-30 21:36:32.473099721 +0000 UTC m=+1331.218922370" lastFinishedPulling="2026-01-30 21:36:39.603635773 +0000 UTC m=+1338.349458432" observedRunningTime="2026-01-30 21:36:40.529400328 +0000 UTC m=+1339.275222977" watchObservedRunningTime="2026-01-30 21:36:40.558048342 +0000 UTC m=+1339.303870991"
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.570554 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" podStartSLOduration=10.570536736 podStartE2EDuration="10.570536736s" podCreationTimestamp="2026-01-30 21:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:40.547354428 +0000 UTC m=+1339.293177077" watchObservedRunningTime="2026-01-30 21:36:40.570536736 +0000 UTC m=+1339.316359385"
Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.596557 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7be55860-0016-49cf-9505-9692dd9ccd36-operator-scripts\") pod \"root-account-create-update-8xlxv\" (UID: \"7be55860-0016-49cf-9505-9692dd9ccd36\") " pod="openstack/root-account-create-update-8xlxv"
\"root-account-create-update-8xlxv\" (UID: \"7be55860-0016-49cf-9505-9692dd9ccd36\") " pod="openstack/root-account-create-update-8xlxv" Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.598118 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7be55860-0016-49cf-9505-9692dd9ccd36-operator-scripts\") pod \"root-account-create-update-8xlxv\" (UID: \"7be55860-0016-49cf-9505-9692dd9ccd36\") " pod="openstack/root-account-create-update-8xlxv" Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.620049 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glgbl\" (UniqueName: \"kubernetes.io/projected/7be55860-0016-49cf-9505-9692dd9ccd36-kube-api-access-glgbl\") pod \"root-account-create-update-8xlxv\" (UID: \"7be55860-0016-49cf-9505-9692dd9ccd36\") " pod="openstack/root-account-create-update-8xlxv" Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.745619 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1da7-account-create-update-q9cg8"] Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.756422 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-p9lfn"] Jan 30 21:36:40 crc kubenswrapper[4751]: W0130 21:36:40.762835 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fbc6a33_d240_4982_ade1_668f5da8b516.slice/crio-697aedf9ffbdda8a06dbe5ac5680879f0f4a2aad04f2d5ce719596367a25a035 WatchSource:0}: Error finding container 697aedf9ffbdda8a06dbe5ac5680879f0f4a2aad04f2d5ce719596367a25a035: Status 404 returned error can't find the container with id 697aedf9ffbdda8a06dbe5ac5680879f0f4a2aad04f2d5ce719596367a25a035 Jan 30 21:36:40 crc kubenswrapper[4751]: W0130 21:36:40.763419 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod722402dd_bf51_47a6_b20e_85aec93527d9.slice/crio-e12b2d3c1c414c985bea152833a4e7437f3747bfd05adf3b5739d215a1fadf48 WatchSource:0}: Error finding container e12b2d3c1c414c985bea152833a4e7437f3747bfd05adf3b5739d215a1fadf48: Status 404 returned error can't find the container with id e12b2d3c1c414c985bea152833a4e7437f3747bfd05adf3b5739d215a1fadf48 Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.768448 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4dbml"] Jan 30 21:36:40 crc kubenswrapper[4751]: W0130 21:36:40.776961 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37f348fb_7f83_40db_98b2_7e8bc603a3e6.slice/crio-df2d8a98804dd8716d1b278672e379bd608478eb951fa38ce8e97562ba876f31 WatchSource:0}: Error finding container df2d8a98804dd8716d1b278672e379bd608478eb951fa38ce8e97562ba876f31: Status 404 returned error can't find the container with id df2d8a98804dd8716d1b278672e379bd608478eb951fa38ce8e97562ba876f31 Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.783360 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vvq25"] Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.798347 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a004-account-create-update-zkpzg"] Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.807654 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/mysqld-exporter-dd31-account-create-update-4hlqb"] Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.822652 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7tt6b"] Jan 30 21:36:40 crc kubenswrapper[4751]: W0130 21:36:40.824701 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70af95fb_5ca8_4482_a1bc_81b1891e0da7.slice/crio-68907368515e04efc423e96f6ad0f34c1d76a72cb81074b939310269d488cbe8 WatchSource:0}: Error finding container 68907368515e04efc423e96f6ad0f34c1d76a72cb81074b939310269d488cbe8: Status 404 returned error can't find the container with id 68907368515e04efc423e96f6ad0f34c1d76a72cb81074b939310269d488cbe8 Jan 30 21:36:40 crc kubenswrapper[4751]: W0130 21:36:40.837497 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93341dcd_a293_4879_8baf_855556383780.slice/crio-d2cd1c9c8696f97faaf38f91732e913b1ec957c1fea1e4ffc25d93f7701a0b4e WatchSource:0}: Error finding container d2cd1c9c8696f97faaf38f91732e913b1ec957c1fea1e4ffc25d93f7701a0b4e: Status 404 returned error can't find the container with id d2cd1c9c8696f97faaf38f91732e913b1ec957c1fea1e4ffc25d93f7701a0b4e Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.866707 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-hgg7b"] Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.878583 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-fed1-account-create-update-ztdkt"] Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.896451 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8xlxv" Jan 30 21:36:40 crc kubenswrapper[4751]: W0130 21:36:40.903458 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1373e37_3653_4f5d_9978_9d1cca4e546b.slice/crio-4ada26d1b2093244b4841da5d56a5cb27cf06118eae54dce88a395277f8ba995 WatchSource:0}: Error finding container 4ada26d1b2093244b4841da5d56a5cb27cf06118eae54dce88a395277f8ba995: Status 404 returned error can't find the container with id 4ada26d1b2093244b4841da5d56a5cb27cf06118eae54dce88a395277f8ba995 Jan 30 21:36:40 crc kubenswrapper[4751]: W0130 21:36:40.906948 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3a5d5df_fe19_49f0_b82a_afbe70b4c9f2.slice/crio-c773f4baaf7dd922c65d6ad7b794d9c6a5b5a5d6d85daf6f5c9bf7f785bbaedf WatchSource:0}: Error finding container c773f4baaf7dd922c65d6ad7b794d9c6a5b5a5d6d85daf6f5c9bf7f785bbaedf: Status 404 returned error can't find the container with id c773f4baaf7dd922c65d6ad7b794d9c6a5b5a5d6d85daf6f5c9bf7f785bbaedf Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.157405 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.309317 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-dns-svc\") pod \"6f39785c-2919-4c29-8405-fd314710c587\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.309740 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr2tf\" (UniqueName: \"kubernetes.io/projected/6f39785c-2919-4c29-8405-fd314710c587-kube-api-access-kr2tf\") pod \"6f39785c-2919-4c29-8405-fd314710c587\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.309778 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-ovsdbserver-nb\") pod \"6f39785c-2919-4c29-8405-fd314710c587\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.309884 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-config\") pod \"6f39785c-2919-4c29-8405-fd314710c587\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.323566 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f39785c-2919-4c29-8405-fd314710c587-kube-api-access-kr2tf" (OuterVolumeSpecName: "kube-api-access-kr2tf") pod "6f39785c-2919-4c29-8405-fd314710c587" (UID: "6f39785c-2919-4c29-8405-fd314710c587"). InnerVolumeSpecName "kube-api-access-kr2tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.414807 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr2tf\" (UniqueName: \"kubernetes.io/projected/6f39785c-2919-4c29-8405-fd314710c587-kube-api-access-kr2tf\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.544313 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8xlxv"] Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.563491 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6f39785c-2919-4c29-8405-fd314710c587" (UID: "6f39785c-2919-4c29-8405-fd314710c587"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.580984 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f39785c-2919-4c29-8405-fd314710c587" (UID: "6f39785c-2919-4c29-8405-fd314710c587"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.603447 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1da7-account-create-update-q9cg8" event={"ID":"722402dd-bf51-47a6-b20e-85aec93527d9","Type":"ContainerStarted","Data":"d6a6c7d319a747790016da0d8bf07d4ab98c3d010eb7ce4cdc966d03c722da28"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.603495 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1da7-account-create-update-q9cg8" event={"ID":"722402dd-bf51-47a6-b20e-85aec93527d9","Type":"ContainerStarted","Data":"e12b2d3c1c414c985bea152833a4e7437f3747bfd05adf3b5739d215a1fadf48"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.632356 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.632377 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.633466 4751 generic.go:334] "Generic (PLEG): container finished" podID="6f39785c-2919-4c29-8405-fd314710c587" containerID="b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd" exitCode=0 Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.633631 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.633647 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" event={"ID":"6f39785c-2919-4c29-8405-fd314710c587","Type":"ContainerDied","Data":"b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.634679 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" event={"ID":"6f39785c-2919-4c29-8405-fd314710c587","Type":"ContainerDied","Data":"55b6cba15b1dde6f627ad66f8e7ca8bf6ccd049b3a97fa354a6cb717078364af"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.634724 4751 scope.go:117] "RemoveContainer" containerID="b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.644585 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-1da7-account-create-update-q9cg8" podStartSLOduration=8.644553228 podStartE2EDuration="8.644553228s" podCreationTimestamp="2026-01-30 21:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:41.632935468 +0000 UTC m=+1340.378758117" watchObservedRunningTime="2026-01-30 21:36:41.644553228 +0000 UTC m=+1340.390375877" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.646603 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a004-account-create-update-zkpzg" event={"ID":"37f348fb-7f83-40db-98b2-7e8bc603a3e6","Type":"ContainerStarted","Data":"1b5908039e6b19df93f09b06f432ee6033fa0e6a44029f167f2bd610adfb389f"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.646647 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-a004-account-create-update-zkpzg" event={"ID":"37f348fb-7f83-40db-98b2-7e8bc603a3e6","Type":"ContainerStarted","Data":"df2d8a98804dd8716d1b278672e379bd608478eb951fa38ce8e97562ba876f31"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.662988 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" event={"ID":"55099194-6cb2-437d-ae0d-a08c104de380","Type":"ContainerStarted","Data":"73515e94e6f7a825d6c9ac37458f6d6de21c87de5edaea4b69d38594e2145bf0"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.663050 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" event={"ID":"55099194-6cb2-437d-ae0d-a08c104de380","Type":"ContainerStarted","Data":"ac159ae2cb6976ef8122c35a06fe61ae9b29a654dcff59cb32ef375cbdebcd34"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.669121 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-a004-account-create-update-zkpzg" podStartSLOduration=8.669104333 podStartE2EDuration="8.669104333s" podCreationTimestamp="2026-01-30 21:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:41.663223756 +0000 UTC m=+1340.409046405" watchObservedRunningTime="2026-01-30 21:36:41.669104333 +0000 UTC m=+1340.414926982" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.672433 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-hgg7b" event={"ID":"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2","Type":"ContainerStarted","Data":"c773f4baaf7dd922c65d6ad7b794d9c6a5b5a5d6d85daf6f5c9bf7f785bbaedf"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.686719 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" podStartSLOduration=6.686703312 podStartE2EDuration="6.686703312s" podCreationTimestamp="2026-01-30 21:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:41.683095346 +0000 UTC m=+1340.428917995" watchObservedRunningTime="2026-01-30 21:36:41.686703312 +0000 UTC m=+1340.432525961" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.697139 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fed1-account-create-update-ztdkt" event={"ID":"d1373e37-3653-4f5d-9978-9d1cca4e546b","Type":"ContainerStarted","Data":"327aabac3be4ee9fde091b36b1b374aaf9d59f04f57b4504442450704eca0e64"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.697247 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fed1-account-create-update-ztdkt" event={"ID":"d1373e37-3653-4f5d-9978-9d1cca4e546b","Type":"ContainerStarted","Data":"4ada26d1b2093244b4841da5d56a5cb27cf06118eae54dce88a395277f8ba995"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.706923 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7tt6b" event={"ID":"93341dcd-a293-4879-8baf-855556383780","Type":"ContainerStarted","Data":"c18d43f25fad540cc4b6980ee198b0b5113db4829b6825bf308264ef91e01601"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.706987 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7tt6b" 
event={"ID":"93341dcd-a293-4879-8baf-855556383780","Type":"ContainerStarted","Data":"d2cd1c9c8696f97faaf38f91732e913b1ec957c1fea1e4ffc25d93f7701a0b4e"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.707589 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-config" (OuterVolumeSpecName: "config") pod "6f39785c-2919-4c29-8405-fd314710c587" (UID: "6f39785c-2919-4c29-8405-fd314710c587"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.712673 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-hgg7b" podStartSLOduration=8.712652105 podStartE2EDuration="8.712652105s" podCreationTimestamp="2026-01-30 21:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:41.707455597 +0000 UTC m=+1340.453278246" watchObservedRunningTime="2026-01-30 21:36:41.712652105 +0000 UTC m=+1340.458474744" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.718838 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vvq25" event={"ID":"70af95fb-5ca8-4482-a1bc-81b1891e0da7","Type":"ContainerStarted","Data":"68907368515e04efc423e96f6ad0f34c1d76a72cb81074b939310269d488cbe8"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.729265 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" event={"ID":"eb683b6d-9110-46e1-8406-eea86d9cc73b","Type":"ContainerStarted","Data":"ed1388c6eb28c157030933478df87642f4fba3d9c198c284f1958d42816f2e6a"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.729301 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" event={"ID":"eb683b6d-9110-46e1-8406-eea86d9cc73b","Type":"ContainerStarted","Data":"91fac1793a7a2b8a269edafca995d78c1aceb7914291bdd22c295ca0ed226b45"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.729350 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-7tt6b" podStartSLOduration=8.729309869 podStartE2EDuration="8.729309869s" podCreationTimestamp="2026-01-30 21:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:41.728238641 +0000 UTC m=+1340.474061290" watchObservedRunningTime="2026-01-30 21:36:41.729309869 +0000 UTC m=+1340.475132518" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.735440 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.772375 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" event={"ID":"4fbc6a33-d240-4982-ade1-668f5da8b516","Type":"ContainerStarted","Data":"963c152112c095b417af6d89f95dac5ff1eb3a21950942a6d257f3fa15a08da7"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.772554 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" event={"ID":"4fbc6a33-d240-4982-ade1-668f5da8b516","Type":"ContainerStarted","Data":"697aedf9ffbdda8a06dbe5ac5680879f0f4a2aad04f2d5ce719596367a25a035"} Jan 
30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.817444 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-fed1-account-create-update-ztdkt" podStartSLOduration=8.817425921 podStartE2EDuration="8.817425921s" podCreationTimestamp="2026-01-30 21:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:41.767642723 +0000 UTC m=+1340.513465372" watchObservedRunningTime="2026-01-30 21:36:41.817425921 +0000 UTC m=+1340.563248570" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.865339 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" podStartSLOduration=6.865299059 podStartE2EDuration="6.865299059s" podCreationTimestamp="2026-01-30 21:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:41.804025093 +0000 UTC m=+1340.549847742" watchObservedRunningTime="2026-01-30 21:36:41.865299059 +0000 UTC m=+1340.611121728" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.935539 4751 scope.go:117] "RemoveContainer" containerID="ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605" Jan 30 21:36:42 crc kubenswrapper[4751]: E0130 21:36:42.130869 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod722402dd_bf51_47a6_b20e_85aec93527d9.slice/crio-conmon-d6a6c7d319a747790016da0d8bf07d4ab98c3d010eb7ce4cdc966d03c722da28.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod722402dd_bf51_47a6_b20e_85aec93527d9.slice/crio-d6a6c7d319a747790016da0d8bf07d4ab98c3d010eb7ce4cdc966d03c722da28.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb683b6d_9110_46e1_8406_eea86d9cc73b.slice/crio-ed1388c6eb28c157030933478df87642f4fba3d9c198c284f1958d42816f2e6a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fbc6a33_d240_4982_ade1_668f5da8b516.slice/crio-963c152112c095b417af6d89f95dac5ff1eb3a21950942a6d257f3fa15a08da7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb683b6d_9110_46e1_8406_eea86d9cc73b.slice/crio-conmon-ed1388c6eb28c157030933478df87642f4fba3d9c198c284f1958d42816f2e6a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fbc6a33_d240_4982_ade1_668f5da8b516.slice/crio-conmon-963c152112c095b417af6d89f95dac5ff1eb3a21950942a6d257f3fa15a08da7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55099194_6cb2_437d_ae0d_a08c104de380.slice/crio-73515e94e6f7a825d6c9ac37458f6d6de21c87de5edaea4b69d38594e2145bf0.scope\": RecentStats: unable to find data in memory cache]" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.304216 4751 scope.go:117] "RemoveContainer" containerID="b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd" Jan 30 21:36:42 crc kubenswrapper[4751]: E0130 21:36:42.305073 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd\": container with ID starting with b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd not found: ID does not exist" containerID="b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.305102 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd"} err="failed to get container status \"b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd\": rpc error: code = NotFound desc = could not find container \"b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd\": container with ID starting with b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd not found: ID does not exist" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.305121 4751 scope.go:117] "RemoveContainer" containerID="ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605" Jan 30 21:36:42 crc kubenswrapper[4751]: E0130 21:36:42.305681 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605\": container with ID starting with ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605 not found: ID does not exist" containerID="ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.305704 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605"} err="failed to get container status \"ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605\": rpc error: code = NotFound desc = could not find container \"ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605\": container with ID starting with ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605 not found: ID does not exist" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.335076 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mxcnd" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.461897 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxklc\" (UniqueName: \"kubernetes.io/projected/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-kube-api-access-pxklc\") pod \"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5\" (UID: \"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5\") " Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.461951 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-operator-scripts\") pod \"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5\" (UID: \"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5\") " Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.463772 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5" (UID: "ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.486078 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-kube-api-access-pxklc" (OuterVolumeSpecName: "kube-api-access-pxklc") pod "ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5" (UID: "ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5"). InnerVolumeSpecName "kube-api-access-pxklc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.564808 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxklc\" (UniqueName: \"kubernetes.io/projected/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-kube-api-access-pxklc\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.564842 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.785937 4751 generic.go:334] "Generic (PLEG): container finished" podID="55099194-6cb2-437d-ae0d-a08c104de380" containerID="73515e94e6f7a825d6c9ac37458f6d6de21c87de5edaea4b69d38594e2145bf0" exitCode=0 Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.786264 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" event={"ID":"55099194-6cb2-437d-ae0d-a08c104de380","Type":"ContainerDied","Data":"73515e94e6f7a825d6c9ac37458f6d6de21c87de5edaea4b69d38594e2145bf0"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.794970 4751 generic.go:334] "Generic (PLEG): container finished" podID="eb683b6d-9110-46e1-8406-eea86d9cc73b" containerID="ed1388c6eb28c157030933478df87642f4fba3d9c198c284f1958d42816f2e6a" exitCode=0 Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.795020 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" event={"ID":"eb683b6d-9110-46e1-8406-eea86d9cc73b","Type":"ContainerDied","Data":"ed1388c6eb28c157030933478df87642f4fba3d9c198c284f1958d42816f2e6a"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.795042 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" event={"ID":"eb683b6d-9110-46e1-8406-eea86d9cc73b","Type":"ContainerStarted","Data":"d13bdba61d4e84c62b4410d765f4f99e77b7c81d9c8b2fd1ad7ff51b9c7b511e"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.802525 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.816989 4751 generic.go:334] "Generic (PLEG): container finished" podID="e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2" containerID="47ae5041500feb907ed9d9736f2e4bbce3e444b85130301585ffd13ba081d9a9" exitCode=0 Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.817059 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-hgg7b" event={"ID":"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2","Type":"ContainerDied","Data":"47ae5041500feb907ed9d9736f2e4bbce3e444b85130301585ffd13ba081d9a9"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.822181 4751 generic.go:334] "Generic (PLEG): container finished" podID="7be55860-0016-49cf-9505-9692dd9ccd36" containerID="ded685defb3526390eca5f7cb2d53cfb12497b060a9cc1ce297a52cc7244f151" exitCode=0 Jan 30 21:36:42 
crc kubenswrapper[4751]: I0130 21:36:42.822362 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8xlxv" event={"ID":"7be55860-0016-49cf-9505-9692dd9ccd36","Type":"ContainerDied","Data":"ded685defb3526390eca5f7cb2d53cfb12497b060a9cc1ce297a52cc7244f151"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.822384 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8xlxv" event={"ID":"7be55860-0016-49cf-9505-9692dd9ccd36","Type":"ContainerStarted","Data":"414a9d0ca0d8ab7d602fb4a81109d4833ae86e6bbc20c6fb24a116f28a92d0b4"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.825434 4751 generic.go:334] "Generic (PLEG): container finished" podID="4fbc6a33-d240-4982-ade1-668f5da8b516" containerID="963c152112c095b417af6d89f95dac5ff1eb3a21950942a6d257f3fa15a08da7" exitCode=0 Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.825478 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" event={"ID":"4fbc6a33-d240-4982-ade1-668f5da8b516","Type":"ContainerDied","Data":"963c152112c095b417af6d89f95dac5ff1eb3a21950942a6d257f3fa15a08da7"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.825839 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" podStartSLOduration=7.825818922 podStartE2EDuration="7.825818922s" podCreationTimestamp="2026-01-30 21:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:42.820523151 +0000 UTC m=+1341.566345800" watchObservedRunningTime="2026-01-30 21:36:42.825818922 +0000 UTC m=+1341.571641581" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.827221 4751 generic.go:334] "Generic (PLEG): container finished" podID="37f348fb-7f83-40db-98b2-7e8bc603a3e6" containerID="1b5908039e6b19df93f09b06f432ee6033fa0e6a44029f167f2bd610adfb389f" exitCode=0 Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.827255 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a004-account-create-update-zkpzg" event={"ID":"37f348fb-7f83-40db-98b2-7e8bc603a3e6","Type":"ContainerDied","Data":"1b5908039e6b19df93f09b06f432ee6033fa0e6a44029f167f2bd610adfb389f"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.828635 4751 generic.go:334] "Generic (PLEG): container finished" podID="722402dd-bf51-47a6-b20e-85aec93527d9" containerID="d6a6c7d319a747790016da0d8bf07d4ab98c3d010eb7ce4cdc966d03c722da28" exitCode=0 Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.828671 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1da7-account-create-update-q9cg8" event={"ID":"722402dd-bf51-47a6-b20e-85aec93527d9","Type":"ContainerDied","Data":"d6a6c7d319a747790016da0d8bf07d4ab98c3d010eb7ce4cdc966d03c722da28"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.830144 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mxcnd" event={"ID":"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5","Type":"ContainerDied","Data":"80ee5941bdfee40d36adda8b22fe45b35867b32b4e232e517bcd95751ece2d05"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.830166 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80ee5941bdfee40d36adda8b22fe45b35867b32b4e232e517bcd95751ece2d05" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.830229 4751 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mxcnd" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.843035 4751 generic.go:334] "Generic (PLEG): container finished" podID="d1373e37-3653-4f5d-9978-9d1cca4e546b" containerID="327aabac3be4ee9fde091b36b1b374aaf9d59f04f57b4504442450704eca0e64" exitCode=0 Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.843107 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fed1-account-create-update-ztdkt" event={"ID":"d1373e37-3653-4f5d-9978-9d1cca4e546b","Type":"ContainerDied","Data":"327aabac3be4ee9fde091b36b1b374aaf9d59f04f57b4504442450704eca0e64"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.848135 4751 generic.go:334] "Generic (PLEG): container finished" podID="93341dcd-a293-4879-8baf-855556383780" containerID="c18d43f25fad540cc4b6980ee198b0b5113db4829b6825bf308264ef91e01601" exitCode=0 Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.848223 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7tt6b" event={"ID":"93341dcd-a293-4879-8baf-855556383780","Type":"ContainerDied","Data":"c18d43f25fad540cc4b6980ee198b0b5113db4829b6825bf308264ef91e01601"} Jan 30 21:36:44 crc kubenswrapper[4751]: I0130 21:36:44.537377 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:44 crc kubenswrapper[4751]: E0130 21:36:44.537524 4751 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 21:36:44 crc kubenswrapper[4751]: E0130 21:36:44.538037 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 21:36:44 crc kubenswrapper[4751]: E0130 21:36:44.538095 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift podName:4f6a1442-f7f7-499a-a7d5-c354d76ba9d5 nodeName:}" failed. No retries permitted until 2026-01-30 21:36:52.538076106 +0000 UTC m=+1351.283898755 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift") pod "swift-storage-0" (UID: "4f6a1442-f7f7-499a-a7d5-c354d76ba9d5") : configmap "swift-ring-files" not found Jan 30 21:36:46 crc kubenswrapper[4751]: I0130 21:36:46.297506 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:48 crc kubenswrapper[4751]: I0130 21:36:48.926567 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" event={"ID":"4fbc6a33-d240-4982-ade1-668f5da8b516","Type":"ContainerDied","Data":"697aedf9ffbdda8a06dbe5ac5680879f0f4a2aad04f2d5ce719596367a25a035"} Jan 30 21:36:48 crc kubenswrapper[4751]: I0130 21:36:48.927001 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="697aedf9ffbdda8a06dbe5ac5680879f0f4a2aad04f2d5ce719596367a25a035" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.165745 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.199935 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.207267 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8xlxv" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.232794 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1da7-account-create-update-q9cg8" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.250976 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7tt6b" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.266609 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fed1-account-create-update-ztdkt" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.279263 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a004-account-create-update-zkpzg" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.298058 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hgg7b" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.350692 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj7db\" (UniqueName: \"kubernetes.io/projected/722402dd-bf51-47a6-b20e-85aec93527d9-kube-api-access-zj7db\") pod \"722402dd-bf51-47a6-b20e-85aec93527d9\" (UID: \"722402dd-bf51-47a6-b20e-85aec93527d9\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351001 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t95l\" (UniqueName: \"kubernetes.io/projected/55099194-6cb2-437d-ae0d-a08c104de380-kube-api-access-6t95l\") pod \"55099194-6cb2-437d-ae0d-a08c104de380\" (UID: \"55099194-6cb2-437d-ae0d-a08c104de380\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351023 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/722402dd-bf51-47a6-b20e-85aec93527d9-operator-scripts\") pod \"722402dd-bf51-47a6-b20e-85aec93527d9\" (UID: \"722402dd-bf51-47a6-b20e-85aec93527d9\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351097 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7be55860-0016-49cf-9505-9692dd9ccd36-operator-scripts\") pod \"7be55860-0016-49cf-9505-9692dd9ccd36\" (UID: \"7be55860-0016-49cf-9505-9692dd9ccd36\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351143 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55099194-6cb2-437d-ae0d-a08c104de380-operator-scripts\") pod \"55099194-6cb2-437d-ae0d-a08c104de380\" (UID: \"55099194-6cb2-437d-ae0d-a08c104de380\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351181 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x92m\" (UniqueName: \"kubernetes.io/projected/93341dcd-a293-4879-8baf-855556383780-kube-api-access-9x92m\") pod 
\"93341dcd-a293-4879-8baf-855556383780\" (UID: \"93341dcd-a293-4879-8baf-855556383780\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351227 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glgbl\" (UniqueName: \"kubernetes.io/projected/7be55860-0016-49cf-9505-9692dd9ccd36-kube-api-access-glgbl\") pod \"7be55860-0016-49cf-9505-9692dd9ccd36\" (UID: \"7be55860-0016-49cf-9505-9692dd9ccd36\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351297 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93341dcd-a293-4879-8baf-855556383780-operator-scripts\") pod \"93341dcd-a293-4879-8baf-855556383780\" (UID: \"93341dcd-a293-4879-8baf-855556383780\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351343 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fbc6a33-d240-4982-ade1-668f5da8b516-operator-scripts\") pod \"4fbc6a33-d240-4982-ade1-668f5da8b516\" (UID: \"4fbc6a33-d240-4982-ade1-668f5da8b516\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351373 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phkvp\" (UniqueName: \"kubernetes.io/projected/4fbc6a33-d240-4982-ade1-668f5da8b516-kube-api-access-phkvp\") pod \"4fbc6a33-d240-4982-ade1-668f5da8b516\" (UID: \"4fbc6a33-d240-4982-ade1-668f5da8b516\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351751 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55099194-6cb2-437d-ae0d-a08c104de380-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55099194-6cb2-437d-ae0d-a08c104de380" (UID: "55099194-6cb2-437d-ae0d-a08c104de380"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351907 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55099194-6cb2-437d-ae0d-a08c104de380-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.352449 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/722402dd-bf51-47a6-b20e-85aec93527d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "722402dd-bf51-47a6-b20e-85aec93527d9" (UID: "722402dd-bf51-47a6-b20e-85aec93527d9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.352577 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7be55860-0016-49cf-9505-9692dd9ccd36-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7be55860-0016-49cf-9505-9692dd9ccd36" (UID: "7be55860-0016-49cf-9505-9692dd9ccd36"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.353047 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fbc6a33-d240-4982-ade1-668f5da8b516-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4fbc6a33-d240-4982-ade1-668f5da8b516" (UID: "4fbc6a33-d240-4982-ade1-668f5da8b516"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.354061 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93341dcd-a293-4879-8baf-855556383780-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93341dcd-a293-4879-8baf-855556383780" (UID: "93341dcd-a293-4879-8baf-855556383780"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.356843 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be55860-0016-49cf-9505-9692dd9ccd36-kube-api-access-glgbl" (OuterVolumeSpecName: "kube-api-access-glgbl") pod "7be55860-0016-49cf-9505-9692dd9ccd36" (UID: "7be55860-0016-49cf-9505-9692dd9ccd36"). InnerVolumeSpecName "kube-api-access-glgbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.358727 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93341dcd-a293-4879-8baf-855556383780-kube-api-access-9x92m" (OuterVolumeSpecName: "kube-api-access-9x92m") pod "93341dcd-a293-4879-8baf-855556383780" (UID: "93341dcd-a293-4879-8baf-855556383780"). InnerVolumeSpecName "kube-api-access-9x92m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.359309 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fbc6a33-d240-4982-ade1-668f5da8b516-kube-api-access-phkvp" (OuterVolumeSpecName: "kube-api-access-phkvp") pod "4fbc6a33-d240-4982-ade1-668f5da8b516" (UID: "4fbc6a33-d240-4982-ade1-668f5da8b516"). InnerVolumeSpecName "kube-api-access-phkvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.359651 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/722402dd-bf51-47a6-b20e-85aec93527d9-kube-api-access-zj7db" (OuterVolumeSpecName: "kube-api-access-zj7db") pod "722402dd-bf51-47a6-b20e-85aec93527d9" (UID: "722402dd-bf51-47a6-b20e-85aec93527d9"). InnerVolumeSpecName "kube-api-access-zj7db". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.359678 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55099194-6cb2-437d-ae0d-a08c104de380-kube-api-access-6t95l" (OuterVolumeSpecName: "kube-api-access-6t95l") pod "55099194-6cb2-437d-ae0d-a08c104de380" (UID: "55099194-6cb2-437d-ae0d-a08c104de380"). InnerVolumeSpecName "kube-api-access-6t95l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.452994 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-operator-scripts\") pod \"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2\" (UID: \"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.453275 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hpll\" (UniqueName: \"kubernetes.io/projected/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-kube-api-access-9hpll\") pod \"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2\" (UID: \"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.453363 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1373e37-3653-4f5d-9978-9d1cca4e546b-operator-scripts\") pod \"d1373e37-3653-4f5d-9978-9d1cca4e546b\" (UID: \"d1373e37-3653-4f5d-9978-9d1cca4e546b\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.453409 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37f348fb-7f83-40db-98b2-7e8bc603a3e6-operator-scripts\") pod \"37f348fb-7f83-40db-98b2-7e8bc603a3e6\" (UID: \"37f348fb-7f83-40db-98b2-7e8bc603a3e6\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.453474 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vdvs\" (UniqueName: \"kubernetes.io/projected/d1373e37-3653-4f5d-9978-9d1cca4e546b-kube-api-access-9vdvs\") pod \"d1373e37-3653-4f5d-9978-9d1cca4e546b\" (UID: \"d1373e37-3653-4f5d-9978-9d1cca4e546b\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.453502 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mphdl\" (UniqueName: \"kubernetes.io/projected/37f348fb-7f83-40db-98b2-7e8bc603a3e6-kube-api-access-mphdl\") pod \"37f348fb-7f83-40db-98b2-7e8bc603a3e6\" (UID: \"37f348fb-7f83-40db-98b2-7e8bc603a3e6\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.453557 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2" (UID: "e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.453837 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1373e37-3653-4f5d-9978-9d1cca4e546b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1373e37-3653-4f5d-9978-9d1cca4e546b" (UID: "d1373e37-3653-4f5d-9978-9d1cca4e546b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.454165 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37f348fb-7f83-40db-98b2-7e8bc603a3e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37f348fb-7f83-40db-98b2-7e8bc603a3e6" (UID: "37f348fb-7f83-40db-98b2-7e8bc603a3e6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.454989 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj7db\" (UniqueName: \"kubernetes.io/projected/722402dd-bf51-47a6-b20e-85aec93527d9-kube-api-access-zj7db\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455024 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1373e37-3653-4f5d-9978-9d1cca4e546b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455044 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t95l\" (UniqueName: \"kubernetes.io/projected/55099194-6cb2-437d-ae0d-a08c104de380-kube-api-access-6t95l\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455062 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/722402dd-bf51-47a6-b20e-85aec93527d9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455083 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37f348fb-7f83-40db-98b2-7e8bc603a3e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455103 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7be55860-0016-49cf-9505-9692dd9ccd36-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455120 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x92m\" (UniqueName: \"kubernetes.io/projected/93341dcd-a293-4879-8baf-855556383780-kube-api-access-9x92m\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455137 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glgbl\" (UniqueName: \"kubernetes.io/projected/7be55860-0016-49cf-9505-9692dd9ccd36-kube-api-access-glgbl\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455156 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455173 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93341dcd-a293-4879-8baf-855556383780-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455191 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fbc6a33-d240-4982-ade1-668f5da8b516-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455209 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phkvp\" (UniqueName: \"kubernetes.io/projected/4fbc6a33-d240-4982-ade1-668f5da8b516-kube-api-access-phkvp\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.456074 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/37f348fb-7f83-40db-98b2-7e8bc603a3e6-kube-api-access-mphdl" (OuterVolumeSpecName: "kube-api-access-mphdl") pod "37f348fb-7f83-40db-98b2-7e8bc603a3e6" (UID: "37f348fb-7f83-40db-98b2-7e8bc603a3e6"). InnerVolumeSpecName "kube-api-access-mphdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.457502 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1373e37-3653-4f5d-9978-9d1cca4e546b-kube-api-access-9vdvs" (OuterVolumeSpecName: "kube-api-access-9vdvs") pod "d1373e37-3653-4f5d-9978-9d1cca4e546b" (UID: "d1373e37-3653-4f5d-9978-9d1cca4e546b"). InnerVolumeSpecName "kube-api-access-9vdvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.457848 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-kube-api-access-9hpll" (OuterVolumeSpecName: "kube-api-access-9hpll") pod "e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2" (UID: "e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2"). InnerVolumeSpecName "kube-api-access-9hpll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.556544 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hpll\" (UniqueName: \"kubernetes.io/projected/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-kube-api-access-9hpll\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.556576 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vdvs\" (UniqueName: \"kubernetes.io/projected/d1373e37-3653-4f5d-9978-9d1cca4e546b-kube-api-access-9vdvs\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.556588 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mphdl\" (UniqueName: \"kubernetes.io/projected/37f348fb-7f83-40db-98b2-7e8bc603a3e6-kube-api-access-mphdl\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.936360 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8xlxv" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.936381 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8xlxv" event={"ID":"7be55860-0016-49cf-9505-9692dd9ccd36","Type":"ContainerDied","Data":"414a9d0ca0d8ab7d602fb4a81109d4833ae86e6bbc20c6fb24a116f28a92d0b4"} Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.936415 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="414a9d0ca0d8ab7d602fb4a81109d4833ae86e6bbc20c6fb24a116f28a92d0b4" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.938686 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fed1-account-create-update-ztdkt" event={"ID":"d1373e37-3653-4f5d-9978-9d1cca4e546b","Type":"ContainerDied","Data":"4ada26d1b2093244b4841da5d56a5cb27cf06118eae54dce88a395277f8ba995"} Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.938727 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-fed1-account-create-update-ztdkt" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.938744 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ada26d1b2093244b4841da5d56a5cb27cf06118eae54dce88a395277f8ba995" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.940754 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7tt6b" event={"ID":"93341dcd-a293-4879-8baf-855556383780","Type":"ContainerDied","Data":"d2cd1c9c8696f97faaf38f91732e913b1ec957c1fea1e4ffc25d93f7701a0b4e"} Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.940777 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2cd1c9c8696f97faaf38f91732e913b1ec957c1fea1e4ffc25d93f7701a0b4e" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.940778 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7tt6b" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.942663 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" event={"ID":"55099194-6cb2-437d-ae0d-a08c104de380","Type":"ContainerDied","Data":"ac159ae2cb6976ef8122c35a06fe61ae9b29a654dcff59cb32ef375cbdebcd34"} Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.942696 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac159ae2cb6976ef8122c35a06fe61ae9b29a654dcff59cb32ef375cbdebcd34" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.942747 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.951388 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vvq25" event={"ID":"70af95fb-5ca8-4482-a1bc-81b1891e0da7","Type":"ContainerStarted","Data":"d406940fb6742e9578d31d784c5dc7b728af135cada6bb76ae850b8c64dbd1f2"} Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.956119 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1da7-account-create-update-q9cg8" event={"ID":"722402dd-bf51-47a6-b20e-85aec93527d9","Type":"ContainerDied","Data":"e12b2d3c1c414c985bea152833a4e7437f3747bfd05adf3b5739d215a1fadf48"} Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.956143 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1da7-account-create-update-q9cg8" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.956153 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e12b2d3c1c414c985bea152833a4e7437f3747bfd05adf3b5739d215a1fadf48" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.958387 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a004-account-create-update-zkpzg" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.958407 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a004-account-create-update-zkpzg" event={"ID":"37f348fb-7f83-40db-98b2-7e8bc603a3e6","Type":"ContainerDied","Data":"df2d8a98804dd8716d1b278672e379bd608478eb951fa38ce8e97562ba876f31"} Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.958438 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df2d8a98804dd8716d1b278672e379bd608478eb951fa38ce8e97562ba876f31" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.961667 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d56430b1-227c-4074-8d43-86953ab9f911","Type":"ContainerStarted","Data":"5e628f84f2b1e0bad7fea4bb8e7b42341154ce8e229a8f36477102accdee0cfb"} Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.964999 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.965010 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hgg7b" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.964989 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-hgg7b" event={"ID":"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2","Type":"ContainerDied","Data":"c773f4baaf7dd922c65d6ad7b794d9c6a5b5a5d6d85daf6f5c9bf7f785bbaedf"} Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.965280 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c773f4baaf7dd922c65d6ad7b794d9c6a5b5a5d6d85daf6f5c9bf7f785bbaedf" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.983475 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-vvq25" podStartSLOduration=5.778478441 podStartE2EDuration="13.983459006s" podCreationTimestamp="2026-01-30 21:36:36 +0000 UTC" firstStartedPulling="2026-01-30 21:36:40.83002564 +0000 UTC m=+1339.575848289" lastFinishedPulling="2026-01-30 21:36:49.035006205 +0000 UTC m=+1347.780828854" observedRunningTime="2026-01-30 21:36:49.972038472 +0000 UTC m=+1348.717861121" watchObservedRunningTime="2026-01-30 21:36:49.983459006 +0000 UTC m=+1348.729281655" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.888867 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-29gtt"] Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889588 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be55860-0016-49cf-9505-9692dd9ccd36" containerName="mariadb-account-create-update" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889606 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be55860-0016-49cf-9505-9692dd9ccd36" containerName="mariadb-account-create-update" Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889621 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55099194-6cb2-437d-ae0d-a08c104de380" containerName="mariadb-account-create-update" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889628 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="55099194-6cb2-437d-ae0d-a08c104de380" containerName="mariadb-account-create-update" Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 
21:36:50.889646 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fbc6a33-d240-4982-ade1-668f5da8b516" containerName="mariadb-database-create" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889652 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fbc6a33-d240-4982-ade1-668f5da8b516" containerName="mariadb-database-create" Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889672 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f39785c-2919-4c29-8405-fd314710c587" containerName="init" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889678 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f39785c-2919-4c29-8405-fd314710c587" containerName="init" Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889689 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f39785c-2919-4c29-8405-fd314710c587" containerName="dnsmasq-dns" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889696 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f39785c-2919-4c29-8405-fd314710c587" containerName="dnsmasq-dns" Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889709 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93341dcd-a293-4879-8baf-855556383780" containerName="mariadb-database-create" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889715 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="93341dcd-a293-4879-8baf-855556383780" containerName="mariadb-database-create" Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889726 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2" containerName="mariadb-database-create" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889734 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2" containerName="mariadb-database-create" Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889744 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5" containerName="mariadb-database-create" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889750 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5" containerName="mariadb-database-create" Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889759 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="722402dd-bf51-47a6-b20e-85aec93527d9" containerName="mariadb-account-create-update" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889765 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="722402dd-bf51-47a6-b20e-85aec93527d9" containerName="mariadb-account-create-update" Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889777 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1373e37-3653-4f5d-9978-9d1cca4e546b" containerName="mariadb-account-create-update" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889783 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1373e37-3653-4f5d-9978-9d1cca4e546b" containerName="mariadb-account-create-update" Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889796 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37f348fb-7f83-40db-98b2-7e8bc603a3e6" containerName="mariadb-account-create-update" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889802 4751 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="37f348fb-7f83-40db-98b2-7e8bc603a3e6" containerName="mariadb-account-create-update" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889982 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2" containerName="mariadb-database-create" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.890004 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="55099194-6cb2-437d-ae0d-a08c104de380" containerName="mariadb-account-create-update" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.890015 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="722402dd-bf51-47a6-b20e-85aec93527d9" containerName="mariadb-account-create-update" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.890025 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5" containerName="mariadb-database-create" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.890036 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be55860-0016-49cf-9505-9692dd9ccd36" containerName="mariadb-account-create-update" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.890046 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1373e37-3653-4f5d-9978-9d1cca4e546b" containerName="mariadb-account-create-update" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.890062 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="37f348fb-7f83-40db-98b2-7e8bc603a3e6" containerName="mariadb-account-create-update" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.890074 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f39785c-2919-4c29-8405-fd314710c587" containerName="dnsmasq-dns" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.890080 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="93341dcd-a293-4879-8baf-855556383780" containerName="mariadb-database-create" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.890091 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fbc6a33-d240-4982-ade1-668f5da8b516" containerName="mariadb-database-create" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.890789 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.911139 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-29gtt"] Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.973533 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.994798 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbcl8\" (UniqueName: \"kubernetes.io/projected/d36824ca-c5a8-4514-9276-e49126a66018-kube-api-access-cbcl8\") pod \"mysqld-exporter-openstack-cell1-db-create-29gtt\" (UID: \"d36824ca-c5a8-4514-9276-e49126a66018\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.995094 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d36824ca-c5a8-4514-9276-e49126a66018-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-29gtt\" (UID: \"d36824ca-c5a8-4514-9276-e49126a66018\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.041617 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-pl94b"] Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.041826 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-pl94b" podUID="aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" containerName="dnsmasq-dns" containerID="cri-o://6cd06f2bb56b148e8bf2fd2524c5d527d970ea6c6b7ba394cc56edcda374faf1" gracePeriod=10 Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.099865 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbcl8\" (UniqueName: \"kubernetes.io/projected/d36824ca-c5a8-4514-9276-e49126a66018-kube-api-access-cbcl8\") pod \"mysqld-exporter-openstack-cell1-db-create-29gtt\" (UID: \"d36824ca-c5a8-4514-9276-e49126a66018\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.100451 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d36824ca-c5a8-4514-9276-e49126a66018-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-29gtt\" (UID: \"d36824ca-c5a8-4514-9276-e49126a66018\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.101410 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-e51b-account-create-update-bskb2"] Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.101786 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d36824ca-c5a8-4514-9276-e49126a66018-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-29gtt\" (UID: \"d36824ca-c5a8-4514-9276-e49126a66018\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.102722 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.105545 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.109739 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-e51b-account-create-update-bskb2"] Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.180703 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbcl8\" (UniqueName: \"kubernetes.io/projected/d36824ca-c5a8-4514-9276-e49126a66018-kube-api-access-cbcl8\") pod \"mysqld-exporter-openstack-cell1-db-create-29gtt\" (UID: \"d36824ca-c5a8-4514-9276-e49126a66018\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.202928 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f2ec939-8595-4611-a636-a46fffaa8ebf-operator-scripts\") pod \"mysqld-exporter-e51b-account-create-update-bskb2\" (UID: \"1f2ec939-8595-4611-a636-a46fffaa8ebf\") " pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.203049 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbqbm\" (UniqueName: \"kubernetes.io/projected/1f2ec939-8595-4611-a636-a46fffaa8ebf-kube-api-access-nbqbm\") pod \"mysqld-exporter-e51b-account-create-update-bskb2\" (UID: \"1f2ec939-8595-4611-a636-a46fffaa8ebf\") " pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.207671 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.295892 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-pl94b" podUID="aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.151:5353: connect: connection refused" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.305214 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f2ec939-8595-4611-a636-a46fffaa8ebf-operator-scripts\") pod \"mysqld-exporter-e51b-account-create-update-bskb2\" (UID: \"1f2ec939-8595-4611-a636-a46fffaa8ebf\") " pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.305361 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbqbm\" (UniqueName: \"kubernetes.io/projected/1f2ec939-8595-4611-a636-a46fffaa8ebf-kube-api-access-nbqbm\") pod \"mysqld-exporter-e51b-account-create-update-bskb2\" (UID: \"1f2ec939-8595-4611-a636-a46fffaa8ebf\") " pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.306573 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f2ec939-8595-4611-a636-a46fffaa8ebf-operator-scripts\") pod \"mysqld-exporter-e51b-account-create-update-bskb2\" (UID: \"1f2ec939-8595-4611-a636-a46fffaa8ebf\") " pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.323339 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbqbm\" (UniqueName: \"kubernetes.io/projected/1f2ec939-8595-4611-a636-a46fffaa8ebf-kube-api-access-nbqbm\") pod \"mysqld-exporter-e51b-account-create-update-bskb2\" (UID: \"1f2ec939-8595-4611-a636-a46fffaa8ebf\") " pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.421623 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.776957 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-8xlxv"] Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.785939 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-8xlxv"] Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.880568 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-29gtt"] Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.971167 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.987619 4751 generic.go:334] "Generic (PLEG): container finished" podID="f18b5d57-5b05-4ef0-bae3-68938e094510" containerID="754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256" exitCode=0 Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.990294 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be55860-0016-49cf-9505-9692dd9ccd36" path="/var/lib/kubelet/pods/7be55860-0016-49cf-9505-9692dd9ccd36/volumes" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.990825 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f18b5d57-5b05-4ef0-bae3-68938e094510","Type":"ContainerDied","Data":"754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256"} Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.991314 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" event={"ID":"d36824ca-c5a8-4514-9276-e49126a66018","Type":"ContainerStarted","Data":"bfc01980f997333ad874c465402aadd998785e21bd958f0233d9e3ee82f2fd2d"} Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.995390 4751 generic.go:334] "Generic (PLEG): container finished" podID="aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" containerID="6cd06f2bb56b148e8bf2fd2524c5d527d970ea6c6b7ba394cc56edcda374faf1" exitCode=0 Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.995468 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-pl94b" event={"ID":"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809","Type":"ContainerDied","Data":"6cd06f2bb56b148e8bf2fd2524c5d527d970ea6c6b7ba394cc56edcda374faf1"} Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.995496 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-pl94b" event={"ID":"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809","Type":"ContainerDied","Data":"5084059983d5e17529806552161f9ac2cf353d5b0f25b7a0a25c23ba8ae664c9"} Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.995507 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5084059983d5e17529806552161f9ac2cf353d5b0f25b7a0a25c23ba8ae664c9" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.000476 4751 generic.go:334] "Generic (PLEG): container finished" podID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" containerID="dc43aef27eee6e5555871ea3e140a0c234f05afe3ded956404826b8a2999ed23" exitCode=0 Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.000546 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" 
event={"ID":"2ed6288f-1f28-4189-a452-10ed3fa78c7f","Type":"ContainerDied","Data":"dc43aef27eee6e5555871ea3e140a0c234f05afe3ded956404826b8a2999ed23"} Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.009990 4751 generic.go:334] "Generic (PLEG): container finished" podID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" containerID="6a2c138626ec1f6b7d91772998275ab4f054944271024ad8876c0420d7d4bbc9" exitCode=0 Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.010082 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"192a5913-0c28-4214-9ac0-d37ca2eeb34c","Type":"ContainerDied","Data":"6a2c138626ec1f6b7d91772998275ab4f054944271024ad8876c0420d7d4bbc9"} Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.015783 4751 generic.go:334] "Generic (PLEG): container finished" podID="61d75daf-41cb-4ab5-b849-c98080ca748b" containerID="cf3b264e8ec141124dc8cea806067e0197228587097f1a72076d1d5e3beee32f" exitCode=0 Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.015906 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"61d75daf-41cb-4ab5-b849-c98080ca748b","Type":"ContainerDied","Data":"cf3b264e8ec141124dc8cea806067e0197228587097f1a72076d1d5e3beee32f"} Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.091268 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-e51b-account-create-update-bskb2"] Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.140959 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.234970 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk2gf\" (UniqueName: \"kubernetes.io/projected/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-kube-api-access-hk2gf\") pod \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.235408 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-dns-svc\") pod \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.235481 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-nb\") pod \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.235617 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-sb\") pod \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.235639 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-config\") pod \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.252515 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-kube-api-access-hk2gf" (OuterVolumeSpecName: "kube-api-access-hk2gf") pod "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" (UID: "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809"). InnerVolumeSpecName "kube-api-access-hk2gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.315001 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" (UID: "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.316673 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" (UID: "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.324975 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-config" (OuterVolumeSpecName: "config") pod "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" (UID: "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.336902 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" (UID: "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.338098 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-sb\") pod \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.338789 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk2gf\" (UniqueName: \"kubernetes.io/projected/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-kube-api-access-hk2gf\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.338805 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.338817 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.338827 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:52 crc kubenswrapper[4751]: W0130 21:36:52.338897 4751 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809/volumes/kubernetes.io~configmap/ovsdbserver-sb Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.338909 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" (UID: "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.348161 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6b64b75d5d-kgc46" podUID="bf03a732-e32e-410a-ae17-1573a2854475" containerName="console" containerID="cri-o://bddd330cf13a903e94930cf7c65192196ece6d61e6ec543ac96c6b64e5e23194" gracePeriod=15 Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.441390 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.543027 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:52 crc kubenswrapper[4751]: E0130 21:36:52.543219 4751 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 21:36:52 crc kubenswrapper[4751]: E0130 21:36:52.543247 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 21:36:52 crc kubenswrapper[4751]: E0130 21:36:52.543315 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift podName:4f6a1442-f7f7-499a-a7d5-c354d76ba9d5 nodeName:}" failed. No retries permitted until 2026-01-30 21:37:08.54329818 +0000 UTC m=+1367.289120819 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift") pod "swift-storage-0" (UID: "4f6a1442-f7f7-499a-a7d5-c354d76ba9d5") : configmap "swift-ring-files" not found Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.026606 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"61d75daf-41cb-4ab5-b849-c98080ca748b","Type":"ContainerStarted","Data":"654aa5cd180d3480262a0eb6327c9c516fd2aafbea0de4e5b807e47db7d88dd1"} Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.027070 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.030589 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f18b5d57-5b05-4ef0-bae3-68938e094510","Type":"ContainerStarted","Data":"fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee"} Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.030838 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.033859 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" event={"ID":"d36824ca-c5a8-4514-9276-e49126a66018","Type":"ContainerStarted","Data":"6d875e7e116aae53e17490ae6ad2fcbb1e85d4c6ca0051daa64edc6f242dd628"} Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.046160 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d56430b1-227c-4074-8d43-86953ab9f911","Type":"ContainerStarted","Data":"28f17c906b227a5af5cc4ace126e147801603741df157b46ecf7a45821fe1d9f"} Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.055973 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b64b75d5d-kgc46_bf03a732-e32e-410a-ae17-1573a2854475/console/0.log" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.056318 4751 generic.go:334] "Generic (PLEG): container finished" podID="bf03a732-e32e-410a-ae17-1573a2854475" containerID="bddd330cf13a903e94930cf7c65192196ece6d61e6ec543ac96c6b64e5e23194" exitCode=2 Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.056441 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b64b75d5d-kgc46" event={"ID":"bf03a732-e32e-410a-ae17-1573a2854475","Type":"ContainerDied","Data":"bddd330cf13a903e94930cf7c65192196ece6d61e6ec543ac96c6b64e5e23194"} Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.068657 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"2ed6288f-1f28-4189-a452-10ed3fa78c7f","Type":"ContainerStarted","Data":"c118273bc1b7e17b96ef2802a30e188177f69c364926f8d0532e695e28d4ca05"} Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.069499 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.069799 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371971.784992 podStartE2EDuration="1m5.069783851s" podCreationTimestamp="2026-01-30 21:35:48 +0000 UTC" firstStartedPulling="2026-01-30 21:35:51.233868574 +0000 UTC m=+1289.979691223" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 21:36:53.050532577 +0000 UTC m=+1351.796355236" watchObservedRunningTime="2026-01-30 21:36:53.069783851 +0000 UTC m=+1351.815606500" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.071995 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"192a5913-0c28-4214-9ac0-d37ca2eeb34c","Type":"ContainerStarted","Data":"8694789fa0038f6976a755ccc1f09ff5edec94cba32aab400030d4cae96b540d"} Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.072257 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.073682 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.073825 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" event={"ID":"1f2ec939-8595-4611-a636-a46fffaa8ebf","Type":"ContainerStarted","Data":"a4c6ded6bcebfebc69538cc39c7d557a4894403ba3b9a46406bf7a54b2fb9107"} Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.073857 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" event={"ID":"1f2ec939-8595-4611-a636-a46fffaa8ebf","Type":"ContainerStarted","Data":"81a17dec6aef6b0552f0eafd7b80b580f1d50832eef7c25f9ef93e23411b5b8e"} Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.138453 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371971.716343 podStartE2EDuration="1m5.138432162s" podCreationTimestamp="2026-01-30 21:35:48 +0000 UTC" firstStartedPulling="2026-01-30 21:35:50.661021804 +0000 UTC m=+1289.406844453" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:53.109262354 +0000 UTC m=+1351.855085013" watchObservedRunningTime="2026-01-30 21:36:53.138432162 +0000 UTC m=+1351.884254811" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.138907 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" podStartSLOduration=3.138902554 podStartE2EDuration="3.138902554s" podCreationTimestamp="2026-01-30 21:36:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:53.125877427 +0000 UTC m=+1351.871700076" watchObservedRunningTime="2026-01-30 21:36:53.138902554 +0000 UTC m=+1351.884725203" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.163961 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-pl94b"] Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.172068 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-pl94b"] Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.239430 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=-9223371971.615358 podStartE2EDuration="1m5.239416987s" podCreationTimestamp="2026-01-30 21:35:48 +0000 UTC" firstStartedPulling="2026-01-30 21:35:51.152886971 +0000 UTC m=+1289.898709620" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:53.211928873 +0000 UTC m=+1351.957751512" 
watchObservedRunningTime="2026-01-30 21:36:53.239416987 +0000 UTC m=+1351.985239636" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.268455 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" podStartSLOduration=2.268441802 podStartE2EDuration="2.268441802s" podCreationTimestamp="2026-01-30 21:36:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:53.264169108 +0000 UTC m=+1352.009991757" watchObservedRunningTime="2026-01-30 21:36:53.268441802 +0000 UTC m=+1352.014264451" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.376383 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=39.937956416 podStartE2EDuration="1m5.376358222s" podCreationTimestamp="2026-01-30 21:35:48 +0000 UTC" firstStartedPulling="2026-01-30 21:35:51.151582247 +0000 UTC m=+1289.897404896" lastFinishedPulling="2026-01-30 21:36:16.589984013 +0000 UTC m=+1315.335806702" observedRunningTime="2026-01-30 21:36:53.364128286 +0000 UTC m=+1352.109950945" watchObservedRunningTime="2026-01-30 21:36:53.376358222 +0000 UTC m=+1352.122180891" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.446393 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b64b75d5d-kgc46_bf03a732-e32e-410a-ae17-1573a2854475/console/0.log" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.446451 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.567888 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-oauth-config\") pod \"bf03a732-e32e-410a-ae17-1573a2854475\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.567974 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn5wq\" (UniqueName: \"kubernetes.io/projected/bf03a732-e32e-410a-ae17-1573a2854475-kube-api-access-zn5wq\") pod \"bf03a732-e32e-410a-ae17-1573a2854475\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.568013 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-oauth-serving-cert\") pod \"bf03a732-e32e-410a-ae17-1573a2854475\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.568039 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-serving-cert\") pod \"bf03a732-e32e-410a-ae17-1573a2854475\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.568106 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-console-config\") pod \"bf03a732-e32e-410a-ae17-1573a2854475\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " Jan 30 21:36:53 crc 
kubenswrapper[4751]: I0130 21:36:53.568153 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-trusted-ca-bundle\") pod \"bf03a732-e32e-410a-ae17-1573a2854475\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.568240 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-service-ca\") pod \"bf03a732-e32e-410a-ae17-1573a2854475\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.569221 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-service-ca" (OuterVolumeSpecName: "service-ca") pod "bf03a732-e32e-410a-ae17-1573a2854475" (UID: "bf03a732-e32e-410a-ae17-1573a2854475"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.569258 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-console-config" (OuterVolumeSpecName: "console-config") pod "bf03a732-e32e-410a-ae17-1573a2854475" (UID: "bf03a732-e32e-410a-ae17-1573a2854475"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.569566 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bf03a732-e32e-410a-ae17-1573a2854475" (UID: "bf03a732-e32e-410a-ae17-1573a2854475"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.569773 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bf03a732-e32e-410a-ae17-1573a2854475" (UID: "bf03a732-e32e-410a-ae17-1573a2854475"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.575528 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bf03a732-e32e-410a-ae17-1573a2854475" (UID: "bf03a732-e32e-410a-ae17-1573a2854475"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.575501 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bf03a732-e32e-410a-ae17-1573a2854475" (UID: "bf03a732-e32e-410a-ae17-1573a2854475"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.584834 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf03a732-e32e-410a-ae17-1573a2854475-kube-api-access-zn5wq" (OuterVolumeSpecName: "kube-api-access-zn5wq") pod "bf03a732-e32e-410a-ae17-1573a2854475" (UID: "bf03a732-e32e-410a-ae17-1573a2854475"). InnerVolumeSpecName "kube-api-access-zn5wq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.670271 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.670474 4751 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.670549 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn5wq\" (UniqueName: \"kubernetes.io/projected/bf03a732-e32e-410a-ae17-1573a2854475-kube-api-access-zn5wq\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.670605 4751 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.670657 4751 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.670707 4751 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.670758 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.912610 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-r8wrn"] Jan 30 21:36:53 crc kubenswrapper[4751]: E0130 21:36:53.913005 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf03a732-e32e-410a-ae17-1573a2854475" containerName="console" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.913023 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf03a732-e32e-410a-ae17-1573a2854475" containerName="console" Jan 30 21:36:53 crc kubenswrapper[4751]: E0130 21:36:53.913041 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" containerName="init" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.913047 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" containerName="init" Jan 30 21:36:53 crc kubenswrapper[4751]: E0130 21:36:53.913061 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" containerName="dnsmasq-dns" Jan 30 
21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.913068 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" containerName="dnsmasq-dns" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.913253 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" containerName="dnsmasq-dns" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.913272 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf03a732-e32e-410a-ae17-1573a2854475" containerName="console" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.913889 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-r8wrn" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.917098 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.917695 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qdcvb" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.933062 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-r8wrn"] Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.037626 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" path="/var/lib/kubelet/pods/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809/volumes" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.093429 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-db-sync-config-data\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.093648 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-combined-ca-bundle\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.093890 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-config-data\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.094053 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcdb2\" (UniqueName: \"kubernetes.io/projected/32a93444-0221-40b7-9869-428788112ae2-kube-api-access-gcdb2\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.102091 4751 generic.go:334] "Generic (PLEG): container finished" podID="1f2ec939-8595-4611-a636-a46fffaa8ebf" containerID="a4c6ded6bcebfebc69538cc39c7d557a4894403ba3b9a46406bf7a54b2fb9107" exitCode=0 Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.102167 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" 
event={"ID":"1f2ec939-8595-4611-a636-a46fffaa8ebf","Type":"ContainerDied","Data":"a4c6ded6bcebfebc69538cc39c7d557a4894403ba3b9a46406bf7a54b2fb9107"} Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.104566 4751 generic.go:334] "Generic (PLEG): container finished" podID="d36824ca-c5a8-4514-9276-e49126a66018" containerID="6d875e7e116aae53e17490ae6ad2fcbb1e85d4c6ca0051daa64edc6f242dd628" exitCode=0 Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.104641 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" event={"ID":"d36824ca-c5a8-4514-9276-e49126a66018","Type":"ContainerDied","Data":"6d875e7e116aae53e17490ae6ad2fcbb1e85d4c6ca0051daa64edc6f242dd628"} Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.107054 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b64b75d5d-kgc46_bf03a732-e32e-410a-ae17-1573a2854475/console/0.log" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.107233 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b64b75d5d-kgc46" event={"ID":"bf03a732-e32e-410a-ae17-1573a2854475","Type":"ContainerDied","Data":"940073f1b9050f0a93c1aa8e842c9477fdedfec5ed669f60a6eea0cf2c8dd11a"} Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.107284 4751 scope.go:117] "RemoveContainer" containerID="bddd330cf13a903e94930cf7c65192196ece6d61e6ec543ac96c6b64e5e23194" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.108846 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.179891 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b64b75d5d-kgc46"] Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.189365 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6b64b75d5d-kgc46"] Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.195699 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcdb2\" (UniqueName: \"kubernetes.io/projected/32a93444-0221-40b7-9869-428788112ae2-kube-api-access-gcdb2\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.195777 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-db-sync-config-data\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.195866 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-combined-ca-bundle\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.195985 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-config-data\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.200635 
4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-combined-ca-bundle\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.200879 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-config-data\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.201697 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-db-sync-config-data\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.212310 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcdb2\" (UniqueName: \"kubernetes.io/projected/32a93444-0221-40b7-9869-428788112ae2-kube-api-access-gcdb2\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.229946 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-r8wrn" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.872314 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-r8wrn"] Jan 30 21:36:54 crc kubenswrapper[4751]: W0130 21:36:54.877172 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32a93444_0221_40b7_9869_428788112ae2.slice/crio-7a175ccea97ecde04362b48b6632c136c6a64e0161ec0c91d00f4d408467a89a WatchSource:0}: Error finding container 7a175ccea97ecde04362b48b6632c136c6a64e0161ec0c91d00f4d408467a89a: Status 404 returned error can't find the container with id 7a175ccea97ecde04362b48b6632c136c6a64e0161ec0c91d00f4d408467a89a Jan 30 21:36:55 crc kubenswrapper[4751]: I0130 21:36:55.155413 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r8wrn" event={"ID":"32a93444-0221-40b7-9869-428788112ae2","Type":"ContainerStarted","Data":"7a175ccea97ecde04362b48b6632c136c6a64e0161ec0c91d00f4d408467a89a"} Jan 30 21:36:55 crc kubenswrapper[4751]: I0130 21:36:55.988261 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf03a732-e32e-410a-ae17-1573a2854475" path="/var/lib/kubelet/pods/bf03a732-e32e-410a-ae17-1573a2854475/volumes" Jan 30 21:36:56 crc kubenswrapper[4751]: I0130 21:36:56.769782 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-2xmqz"] Jan 30 21:36:56 crc kubenswrapper[4751]: I0130 21:36:56.771221 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2xmqz" Jan 30 21:36:56 crc kubenswrapper[4751]: I0130 21:36:56.773431 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 21:36:56 crc kubenswrapper[4751]: I0130 21:36:56.778537 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2xmqz"] Jan 30 21:36:56 crc kubenswrapper[4751]: I0130 21:36:56.963867 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-operator-scripts\") pod \"root-account-create-update-2xmqz\" (UID: \"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6\") " pod="openstack/root-account-create-update-2xmqz" Jan 30 21:36:56 crc kubenswrapper[4751]: I0130 21:36:56.963950 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcsnv\" (UniqueName: \"kubernetes.io/projected/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-kube-api-access-qcsnv\") pod \"root-account-create-update-2xmqz\" (UID: \"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6\") " pod="openstack/root-account-create-update-2xmqz" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.065833 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-operator-scripts\") pod \"root-account-create-update-2xmqz\" (UID: \"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6\") " pod="openstack/root-account-create-update-2xmqz" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.065888 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcsnv\" (UniqueName: \"kubernetes.io/projected/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-kube-api-access-qcsnv\") pod \"root-account-create-update-2xmqz\" (UID: \"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6\") " pod="openstack/root-account-create-update-2xmqz" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.066679 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-operator-scripts\") pod \"root-account-create-update-2xmqz\" (UID: \"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6\") " pod="openstack/root-account-create-update-2xmqz" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.089524 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcsnv\" (UniqueName: \"kubernetes.io/projected/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-kube-api-access-qcsnv\") pod \"root-account-create-update-2xmqz\" (UID: \"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6\") " pod="openstack/root-account-create-update-2xmqz" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.091119 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2xmqz" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.187977 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.193995 4751 generic.go:334] "Generic (PLEG): container finished" podID="70af95fb-5ca8-4482-a1bc-81b1891e0da7" containerID="d406940fb6742e9578d31d784c5dc7b728af135cada6bb76ae850b8c64dbd1f2" exitCode=0 Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.194062 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vvq25" event={"ID":"70af95fb-5ca8-4482-a1bc-81b1891e0da7","Type":"ContainerDied","Data":"d406940fb6742e9578d31d784c5dc7b728af135cada6bb76ae850b8c64dbd1f2"} Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.197920 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" event={"ID":"d36824ca-c5a8-4514-9276-e49126a66018","Type":"ContainerDied","Data":"bfc01980f997333ad874c465402aadd998785e21bd958f0233d9e3ee82f2fd2d"} Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.197959 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfc01980f997333ad874c465402aadd998785e21bd958f0233d9e3ee82f2fd2d" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.197941 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.200242 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" event={"ID":"1f2ec939-8595-4611-a636-a46fffaa8ebf","Type":"ContainerDied","Data":"81a17dec6aef6b0552f0eafd7b80b580f1d50832eef7c25f9ef93e23411b5b8e"} Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.200265 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81a17dec6aef6b0552f0eafd7b80b580f1d50832eef7c25f9ef93e23411b5b8e" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.224044 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.376426 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbqbm\" (UniqueName: \"kubernetes.io/projected/1f2ec939-8595-4611-a636-a46fffaa8ebf-kube-api-access-nbqbm\") pod \"1f2ec939-8595-4611-a636-a46fffaa8ebf\" (UID: \"1f2ec939-8595-4611-a636-a46fffaa8ebf\") " Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.376512 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d36824ca-c5a8-4514-9276-e49126a66018-operator-scripts\") pod \"d36824ca-c5a8-4514-9276-e49126a66018\" (UID: \"d36824ca-c5a8-4514-9276-e49126a66018\") " Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.376556 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f2ec939-8595-4611-a636-a46fffaa8ebf-operator-scripts\") pod \"1f2ec939-8595-4611-a636-a46fffaa8ebf\" (UID: \"1f2ec939-8595-4611-a636-a46fffaa8ebf\") " Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.376791 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbcl8\" (UniqueName: \"kubernetes.io/projected/d36824ca-c5a8-4514-9276-e49126a66018-kube-api-access-cbcl8\") pod \"d36824ca-c5a8-4514-9276-e49126a66018\" (UID: \"d36824ca-c5a8-4514-9276-e49126a66018\") " Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.378442 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d36824ca-c5a8-4514-9276-e49126a66018-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d36824ca-c5a8-4514-9276-e49126a66018" (UID: "d36824ca-c5a8-4514-9276-e49126a66018"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.378733 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f2ec939-8595-4611-a636-a46fffaa8ebf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f2ec939-8595-4611-a636-a46fffaa8ebf" (UID: "1f2ec939-8595-4611-a636-a46fffaa8ebf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.383815 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d36824ca-c5a8-4514-9276-e49126a66018-kube-api-access-cbcl8" (OuterVolumeSpecName: "kube-api-access-cbcl8") pod "d36824ca-c5a8-4514-9276-e49126a66018" (UID: "d36824ca-c5a8-4514-9276-e49126a66018"). InnerVolumeSpecName "kube-api-access-cbcl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.387665 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f2ec939-8595-4611-a636-a46fffaa8ebf-kube-api-access-nbqbm" (OuterVolumeSpecName: "kube-api-access-nbqbm") pod "1f2ec939-8595-4611-a636-a46fffaa8ebf" (UID: "1f2ec939-8595-4611-a636-a46fffaa8ebf"). InnerVolumeSpecName "kube-api-access-nbqbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.480362 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbcl8\" (UniqueName: \"kubernetes.io/projected/d36824ca-c5a8-4514-9276-e49126a66018-kube-api-access-cbcl8\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.480397 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbqbm\" (UniqueName: \"kubernetes.io/projected/1f2ec939-8595-4611-a636-a46fffaa8ebf-kube-api-access-nbqbm\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.480407 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d36824ca-c5a8-4514-9276-e49126a66018-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.480415 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f2ec939-8595-4611-a636-a46fffaa8ebf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.712200 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2xmqz"] Jan 30 21:36:57 crc kubenswrapper[4751]: W0130 21:36:57.718106 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27ed4323_ecba_4f90_b7ea_a5a0ff7713d6.slice/crio-89532bad97a6f7064b4010584f795d1f8d343d93067541a7bf6a67ea4dbcd25d WatchSource:0}: Error finding container 89532bad97a6f7064b4010584f795d1f8d343d93067541a7bf6a67ea4dbcd25d: Status 404 returned error can't find the container with id 89532bad97a6f7064b4010584f795d1f8d343d93067541a7bf6a67ea4dbcd25d Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.212151 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d56430b1-227c-4074-8d43-86953ab9f911","Type":"ContainerStarted","Data":"a9ca9e07790f5d346c1a4232c0516dbab0611a34ac86ef5489631c5577ce240b"} Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.215272 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2xmqz" event={"ID":"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6","Type":"ContainerStarted","Data":"0b6fa7471c097e1e891323221bf11b4c99ace89cd782cbdf349cf6bb9189e783"} Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.215453 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2xmqz" event={"ID":"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6","Type":"ContainerStarted","Data":"89532bad97a6f7064b4010584f795d1f8d343d93067541a7bf6a67ea4dbcd25d"} Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.215476 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.252972 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=25.761409495 podStartE2EDuration="1m3.252955772s" podCreationTimestamp="2026-01-30 21:35:55 +0000 UTC" firstStartedPulling="2026-01-30 21:36:19.593636791 +0000 UTC m=+1318.339459460" lastFinishedPulling="2026-01-30 21:36:57.085183098 +0000 UTC m=+1355.831005737" observedRunningTime="2026-01-30 21:36:58.247930538 +0000 UTC m=+1356.993753187" watchObservedRunningTime="2026-01-30 21:36:58.252955772 +0000 UTC m=+1356.998778421" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.640935 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-g9s48" podUID="fbc382fd-1513-4137-b801-5627cc5886ea" containerName="ovn-controller" probeResult="failure" output=< Jan 30 21:36:58 crc kubenswrapper[4751]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 21:36:58 crc kubenswrapper[4751]: > Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.701841 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.710702 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.732124 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.804665 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-scripts\") pod \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.804849 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-combined-ca-bundle\") pod \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.804900 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/70af95fb-5ca8-4482-a1bc-81b1891e0da7-etc-swift\") pod \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.804949 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-ring-data-devices\") pod \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.805029 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlmhc\" (UniqueName: \"kubernetes.io/projected/70af95fb-5ca8-4482-a1bc-81b1891e0da7-kube-api-access-mlmhc\") pod \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.805085 4751 
Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.805132 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-swiftconf\") pod \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") "
Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.805734 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70af95fb-5ca8-4482-a1bc-81b1891e0da7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "70af95fb-5ca8-4482-a1bc-81b1891e0da7" (UID: "70af95fb-5ca8-4482-a1bc-81b1891e0da7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.805882 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "70af95fb-5ca8-4482-a1bc-81b1891e0da7" (UID: "70af95fb-5ca8-4482-a1bc-81b1891e0da7"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.806183 4751 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/70af95fb-5ca8-4482-a1bc-81b1891e0da7-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.806201 4751 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.809827 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70af95fb-5ca8-4482-a1bc-81b1891e0da7-kube-api-access-mlmhc" (OuterVolumeSpecName: "kube-api-access-mlmhc") pod "70af95fb-5ca8-4482-a1bc-81b1891e0da7" (UID: "70af95fb-5ca8-4482-a1bc-81b1891e0da7"). InnerVolumeSpecName "kube-api-access-mlmhc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.814767 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "70af95fb-5ca8-4482-a1bc-81b1891e0da7" (UID: "70af95fb-5ca8-4482-a1bc-81b1891e0da7"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.841271 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "70af95fb-5ca8-4482-a1bc-81b1891e0da7" (UID: "70af95fb-5ca8-4482-a1bc-81b1891e0da7"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
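A volume being released passes through three distinct records here: reconciler_common.go:159 ("UnmountVolume started"), operation_generator.go:803 ("TearDown succeeded"), and reconciler_common.go:293 ("Volume detached"). A rough sketch of correlating the three stages per volume and flagging anything that never reaches "detached"; the patterns are illustrative and key on the human-readable volume name, which all three stages share in these records:

```python
import re
import sys

# Three stages per volume in the records above; \\? tolerates the escaped
# quotes (\") that klog prints inside structured messages.
STARTED  = re.compile(r'UnmountVolume started for volume \\?"([^"\\]+)\\?"')
TORNDOWN = re.compile(r'TearDown succeeded for volume "[^"]+" \(OuterVolumeSpecName: "([^"]+)"\)')
DETACHED = re.compile(r'Volume detached for volume \\?"([^"\\]+)\\?"')

last_stage = {}
for line in sys.stdin:
    for stage, rx in (("started", STARTED), ("torn down", TORNDOWN), ("detached", DETACHED)):
        m = rx.search(line)
        if m:
            last_stage[m.group(1)] = stage

stuck = sorted(v for v, s in last_stage.items() if s != "detached")
print("volumes not yet detached:", ", ".join(stuck) or "none")
```

On the span above, every volume of pod 70af95fb ends in the "detached" stage, so the sketch would report none stuck.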
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.842278 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-scripts" (OuterVolumeSpecName: "scripts") pod "70af95fb-5ca8-4482-a1bc-81b1891e0da7" (UID: "70af95fb-5ca8-4482-a1bc-81b1891e0da7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.868972 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70af95fb-5ca8-4482-a1bc-81b1891e0da7" (UID: "70af95fb-5ca8-4482-a1bc-81b1891e0da7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.907931 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.907960 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.907971 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlmhc\" (UniqueName: \"kubernetes.io/projected/70af95fb-5ca8-4482-a1bc-81b1891e0da7-kube-api-access-mlmhc\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.907979 4751 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.907989 4751 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.957111 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-g9s48-config-hs6bc"] Jan 30 21:36:58 crc kubenswrapper[4751]: E0130 21:36:58.957492 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36824ca-c5a8-4514-9276-e49126a66018" containerName="mariadb-database-create" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.957512 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36824ca-c5a8-4514-9276-e49126a66018" containerName="mariadb-database-create" Jan 30 21:36:58 crc kubenswrapper[4751]: E0130 21:36:58.957542 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2ec939-8595-4611-a636-a46fffaa8ebf" containerName="mariadb-account-create-update" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.957549 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2ec939-8595-4611-a636-a46fffaa8ebf" containerName="mariadb-account-create-update" Jan 30 21:36:58 crc kubenswrapper[4751]: E0130 21:36:58.957564 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70af95fb-5ca8-4482-a1bc-81b1891e0da7" containerName="swift-ring-rebalance" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.957570 4751 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="70af95fb-5ca8-4482-a1bc-81b1891e0da7" containerName="swift-ring-rebalance" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.959862 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36824ca-c5a8-4514-9276-e49126a66018" containerName="mariadb-database-create" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.959887 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="70af95fb-5ca8-4482-a1bc-81b1891e0da7" containerName="swift-ring-rebalance" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.959920 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f2ec939-8595-4611-a636-a46fffaa8ebf" containerName="mariadb-account-create-update" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.960641 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.964034 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.972123 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g9s48-config-hs6bc"] Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.016430 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.016532 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-log-ovn\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.016606 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run-ovn\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.016670 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-scripts\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.016709 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jznl\" (UniqueName: \"kubernetes.io/projected/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-kube-api-access-7jznl\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.016726 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-additional-scripts\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.118277 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-scripts\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.118390 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jznl\" (UniqueName: \"kubernetes.io/projected/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-kube-api-access-7jznl\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.118417 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-additional-scripts\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.118492 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.118586 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-log-ovn\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.118651 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run-ovn\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.118817 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.118857 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-log-ovn\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.118859 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run-ovn\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.119539 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-additional-scripts\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.120475 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-scripts\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.135977 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jznl\" (UniqueName: \"kubernetes.io/projected/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-kube-api-access-7jznl\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.228374 4751 generic.go:334] "Generic (PLEG): container finished" podID="27ed4323-ecba-4f90-b7ea-a5a0ff7713d6" containerID="0b6fa7471c097e1e891323221bf11b4c99ace89cd782cbdf349cf6bb9189e783" exitCode=0 Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.228477 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2xmqz" event={"ID":"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6","Type":"ContainerDied","Data":"0b6fa7471c097e1e891323221bf11b4c99ace89cd782cbdf349cf6bb9189e783"} Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.231974 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vvq25" event={"ID":"70af95fb-5ca8-4482-a1bc-81b1891e0da7","Type":"ContainerDied","Data":"68907368515e04efc423e96f6ad0f34c1d76a72cb81074b939310269d488cbe8"} Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.232000 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68907368515e04efc423e96f6ad0f34c1d76a72cb81074b939310269d488cbe8" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.232031 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.291157 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.790584 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2xmqz" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.834831 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcsnv\" (UniqueName: \"kubernetes.io/projected/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-kube-api-access-qcsnv\") pod \"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6\" (UID: \"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6\") " Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.835068 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-operator-scripts\") pod \"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6\" (UID: \"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6\") " Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.835462 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27ed4323-ecba-4f90-b7ea-a5a0ff7713d6" (UID: "27ed4323-ecba-4f90-b7ea-a5a0ff7713d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.835969 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.839981 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-kube-api-access-qcsnv" (OuterVolumeSpecName: "kube-api-access-qcsnv") pod "27ed4323-ecba-4f90-b7ea-a5a0ff7713d6" (UID: "27ed4323-ecba-4f90-b7ea-a5a0ff7713d6"). InnerVolumeSpecName "kube-api-access-qcsnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.937452 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcsnv\" (UniqueName: \"kubernetes.io/projected/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-kube-api-access-qcsnv\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.946174 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g9s48-config-hs6bc"] Jan 30 21:36:59 crc kubenswrapper[4751]: W0130 21:36:59.949451 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9572d2c_ecb1_4249_9cc0_9a3881e6960c.slice/crio-d243dff1baf5f12ba2f921865d6322fa5c9ae08ac922a4422f0173c54010fe48 WatchSource:0}: Error finding container d243dff1baf5f12ba2f921865d6322fa5c9ae08ac922a4422f0173c54010fe48: Status 404 returned error can't find the container with id d243dff1baf5f12ba2f921865d6322fa5c9ae08ac922a4422f0173c54010fe48 Jan 30 21:37:00 crc kubenswrapper[4751]: I0130 21:37:00.262274 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g9s48-config-hs6bc" event={"ID":"d9572d2c-ecb1-4249-9cc0-9a3881e6960c","Type":"ContainerStarted","Data":"d243dff1baf5f12ba2f921865d6322fa5c9ae08ac922a4422f0173c54010fe48"} Jan 30 21:37:00 crc kubenswrapper[4751]: I0130 21:37:00.277645 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2xmqz" event={"ID":"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6","Type":"ContainerDied","Data":"89532bad97a6f7064b4010584f795d1f8d343d93067541a7bf6a67ea4dbcd25d"} Jan 30 21:37:00 crc kubenswrapper[4751]: I0130 21:37:00.277713 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89532bad97a6f7064b4010584f795d1f8d343d93067541a7bf6a67ea4dbcd25d" Jan 30 21:37:00 crc kubenswrapper[4751]: I0130 21:37:00.277723 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2xmqz" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.256017 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 30 21:37:01 crc kubenswrapper[4751]: E0130 21:37:01.256437 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ed4323-ecba-4f90-b7ea-a5a0ff7713d6" containerName="mariadb-account-create-update" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.256454 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ed4323-ecba-4f90-b7ea-a5a0ff7713d6" containerName="mariadb-account-create-update" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.256637 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="27ed4323-ecba-4f90-b7ea-a5a0ff7713d6" containerName="mariadb-account-create-update" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.257280 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.268175 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.284265 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.293576 4751 generic.go:334] "Generic (PLEG): container finished" podID="d9572d2c-ecb1-4249-9cc0-9a3881e6960c" containerID="8ab084e559e8069a5cdd46d2514468a22129fd354769c2604ada982fbc95ae13" exitCode=0 Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.293662 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g9s48-config-hs6bc" event={"ID":"d9572d2c-ecb1-4249-9cc0-9a3881e6960c","Type":"ContainerDied","Data":"8ab084e559e8069a5cdd46d2514468a22129fd354769c2604ada982fbc95ae13"} Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.361843 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwtvg\" (UniqueName: \"kubernetes.io/projected/0ea4b0a2-4b62-47b1-b925-f78af9c42125-kube-api-access-lwtvg\") pod \"mysqld-exporter-0\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " pod="openstack/mysqld-exporter-0" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.362043 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " pod="openstack/mysqld-exporter-0" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.362075 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-config-data\") pod \"mysqld-exporter-0\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " pod="openstack/mysqld-exporter-0" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.464323 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " pod="openstack/mysqld-exporter-0" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.464675 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-config-data\") pod \"mysqld-exporter-0\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " pod="openstack/mysqld-exporter-0" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.464742 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwtvg\" (UniqueName: \"kubernetes.io/projected/0ea4b0a2-4b62-47b1-b925-f78af9c42125-kube-api-access-lwtvg\") pod \"mysqld-exporter-0\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " pod="openstack/mysqld-exporter-0" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.472991 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-config-data\") pod \"mysqld-exporter-0\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " 
pod="openstack/mysqld-exporter-0" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.473885 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " pod="openstack/mysqld-exporter-0" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.482913 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwtvg\" (UniqueName: \"kubernetes.io/projected/0ea4b0a2-4b62-47b1-b925-f78af9c42125-kube-api-access-lwtvg\") pod \"mysqld-exporter-0\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " pod="openstack/mysqld-exporter-0" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.577192 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.080218 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.130199 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 30 21:37:02 crc kubenswrapper[4751]: W0130 21:37:02.133501 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ea4b0a2_4b62_47b1_b925_f78af9c42125.slice/crio-de8e457ae0f4068038c3e6dd30bdd6296bb65bd86e565ea48c1e280f1358b506 WatchSource:0}: Error finding container de8e457ae0f4068038c3e6dd30bdd6296bb65bd86e565ea48c1e280f1358b506: Status 404 returned error can't find the container with id de8e457ae0f4068038c3e6dd30bdd6296bb65bd86e565ea48c1e280f1358b506 Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.304831 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"0ea4b0a2-4b62-47b1-b925-f78af9c42125","Type":"ContainerStarted","Data":"de8e457ae0f4068038c3e6dd30bdd6296bb65bd86e565ea48c1e280f1358b506"} Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.715967 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.803563 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jznl\" (UniqueName: \"kubernetes.io/projected/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-kube-api-access-7jznl\") pod \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.803670 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-log-ovn\") pod \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.803714 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-additional-scripts\") pod \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.803744 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-scripts\") pod \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.803786 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run-ovn\") pod \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.803854 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run\") pod \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.805032 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run" (OuterVolumeSpecName: "var-run") pod "d9572d2c-ecb1-4249-9cc0-9a3881e6960c" (UID: "d9572d2c-ecb1-4249-9cc0-9a3881e6960c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.806874 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "d9572d2c-ecb1-4249-9cc0-9a3881e6960c" (UID: "d9572d2c-ecb1-4249-9cc0-9a3881e6960c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.806935 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d9572d2c-ecb1-4249-9cc0-9a3881e6960c" (UID: "d9572d2c-ecb1-4249-9cc0-9a3881e6960c"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.807398 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d9572d2c-ecb1-4249-9cc0-9a3881e6960c" (UID: "d9572d2c-ecb1-4249-9cc0-9a3881e6960c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.807733 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-scripts" (OuterVolumeSpecName: "scripts") pod "d9572d2c-ecb1-4249-9cc0-9a3881e6960c" (UID: "d9572d2c-ecb1-4249-9cc0-9a3881e6960c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.834757 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-kube-api-access-7jznl" (OuterVolumeSpecName: "kube-api-access-7jznl") pod "d9572d2c-ecb1-4249-9cc0-9a3881e6960c" (UID: "d9572d2c-ecb1-4249-9cc0-9a3881e6960c"). InnerVolumeSpecName "kube-api-access-7jznl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.916594 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jznl\" (UniqueName: \"kubernetes.io/projected/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-kube-api-access-7jznl\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.916798 4751 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.916858 4751 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.916913 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.916976 4751 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.917034 4751 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:03 crc kubenswrapper[4751]: I0130 21:37:03.332901 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g9s48-config-hs6bc" event={"ID":"d9572d2c-ecb1-4249-9cc0-9a3881e6960c","Type":"ContainerDied","Data":"d243dff1baf5f12ba2f921865d6322fa5c9ae08ac922a4422f0173c54010fe48"} Jan 30 21:37:03 crc kubenswrapper[4751]: I0130 21:37:03.332949 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d243dff1baf5f12ba2f921865d6322fa5c9ae08ac922a4422f0173c54010fe48" Jan 30 21:37:03 crc kubenswrapper[4751]: I0130 
21:37:03.333025 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:37:03 crc kubenswrapper[4751]: I0130 21:37:03.659311 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-g9s48" Jan 30 21:37:03 crc kubenswrapper[4751]: I0130 21:37:03.828880 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-g9s48-config-hs6bc"] Jan 30 21:37:03 crc kubenswrapper[4751]: I0130 21:37:03.839756 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-g9s48-config-hs6bc"] Jan 30 21:37:03 crc kubenswrapper[4751]: I0130 21:37:03.989942 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9572d2c-ecb1-4249-9cc0-9a3881e6960c" path="/var/lib/kubelet/pods/d9572d2c-ecb1-4249-9cc0-9a3881e6960c/volumes" Jan 30 21:37:08 crc kubenswrapper[4751]: I0130 21:37:08.545155 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:37:08 crc kubenswrapper[4751]: I0130 21:37:08.583571 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:37:08 crc kubenswrapper[4751]: I0130 21:37:08.772151 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 21:37:10 crc kubenswrapper[4751]: I0130 21:37:10.032358 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f18b5d57-5b05-4ef0-bae3-68938e094510" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: connect: connection refused" Jan 30 21:37:10 crc kubenswrapper[4751]: I0130 21:37:10.392213 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: connect: connection refused" Jan 30 21:37:10 crc kubenswrapper[4751]: I0130 21:37:10.406962 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.136:5671: connect: connection refused" Jan 30 21:37:10 crc kubenswrapper[4751]: I0130 21:37:10.431632 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:37:11 crc kubenswrapper[4751]: I0130 21:37:11.408184 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 21:37:11 crc kubenswrapper[4751]: W0130 21:37:11.757581 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f6a1442_f7f7_499a_a7d5_c354d76ba9d5.slice/crio-360eed0d27f0ed842a5c448ba4083b3bebbf3ca8bf8301e59f56394be2cdb774 WatchSource:0}: Error finding container 360eed0d27f0ed842a5c448ba4083b3bebbf3ca8bf8301e59f56394be2cdb774: Status 404 returned error can't find the container with id 
360eed0d27f0ed842a5c448ba4083b3bebbf3ca8bf8301e59f56394be2cdb774 Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.080675 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.083061 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.286346 4751 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod6f39785c-2919-4c29-8405-fd314710c587"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod6f39785c-2919-4c29-8405-fd314710c587] : Timed out while waiting for systemd to remove kubepods-besteffort-pod6f39785c_2919_4c29_8405_fd314710c587.slice" Jan 30 21:37:12 crc kubenswrapper[4751]: E0130 21:37:12.286406 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod6f39785c-2919-4c29-8405-fd314710c587] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod6f39785c-2919-4c29-8405-fd314710c587] : Timed out while waiting for systemd to remove kubepods-besteffort-pod6f39785c_2919_4c29_8405_fd314710c587.slice" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" podUID="6f39785c-2919-4c29-8405-fd314710c587" Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.439710 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r8wrn" event={"ID":"32a93444-0221-40b7-9869-428788112ae2","Type":"ContainerStarted","Data":"291bba4a83a01e50f5b8260a72f7589443ccaf7ad2482ecfd294e283e08c6b24"} Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.442443 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"360eed0d27f0ed842a5c448ba4083b3bebbf3ca8bf8301e59f56394be2cdb774"} Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.444545 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"0ea4b0a2-4b62-47b1-b925-f78af9c42125","Type":"ContainerStarted","Data":"20dafdd7e367671986fd5ba74e2e896a728dd40248873c547c64ef7943928472"} Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.444578 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.445969 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.471304 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-r8wrn" podStartSLOduration=3.498873763 podStartE2EDuration="19.471286814s" podCreationTimestamp="2026-01-30 21:36:53 +0000 UTC" firstStartedPulling="2026-01-30 21:36:54.879721492 +0000 UTC m=+1353.625544141" lastFinishedPulling="2026-01-30 21:37:10.852134543 +0000 UTC m=+1369.597957192" observedRunningTime="2026-01-30 21:37:12.457776023 +0000 UTC m=+1371.203598682" watchObservedRunningTime="2026-01-30 21:37:12.471286814 +0000 UTC m=+1371.217109463" Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.488443 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=1.8059720270000001 podStartE2EDuration="11.4883974s" podCreationTimestamp="2026-01-30 21:37:01 +0000 UTC" firstStartedPulling="2026-01-30 21:37:02.135888205 +0000 UTC m=+1360.881710854" lastFinishedPulling="2026-01-30 21:37:11.818313568 +0000 UTC m=+1370.564136227" observedRunningTime="2026-01-30 21:37:12.475921097 +0000 UTC m=+1371.221743746" watchObservedRunningTime="2026-01-30 21:37:12.4883974 +0000 UTC m=+1371.234220049" Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.549132 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-gcttq"] Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.560927 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-gcttq"] Jan 30 21:37:13 crc kubenswrapper[4751]: I0130 21:37:13.469637 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"7a4a522d1102a82b6cc3bc3de1b26131ae54a72e2aa27d700bc820cb187224ad"} Jan 30 21:37:13 crc kubenswrapper[4751]: I0130 21:37:13.988823 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f39785c-2919-4c29-8405-fd314710c587" path="/var/lib/kubelet/pods/6f39785c-2919-4c29-8405-fd314710c587/volumes" Jan 30 21:37:14 crc kubenswrapper[4751]: I0130 21:37:14.481246 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"8571cc1450ceb2e32cfddbd2edae931c75b4d6f0afbf409ef929fc9df73ad2dc"} Jan 30 21:37:14 crc kubenswrapper[4751]: I0130 21:37:14.481300 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"fbb5a6849ee00d84a95512bbd43bb47ae034c963c9a72669a99f85048a502147"} Jan 30 21:37:14 crc kubenswrapper[4751]: I0130 21:37:14.481314 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"894135ba89b7e160a219d64268b79c0eb9ca3709aa94a1511447855dad625fe3"} Jan 30 21:37:14 crc kubenswrapper[4751]: I0130 21:37:14.647501 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 21:37:14 crc kubenswrapper[4751]: I0130 21:37:14.647895 4751 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="prometheus" containerID="cri-o://5e628f84f2b1e0bad7fea4bb8e7b42341154ce8e229a8f36477102accdee0cfb" gracePeriod=600 Jan 30 21:37:14 crc kubenswrapper[4751]: I0130 21:37:14.648024 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="thanos-sidecar" containerID="cri-o://a9ca9e07790f5d346c1a4232c0516dbab0611a34ac86ef5489631c5577ce240b" gracePeriod=600 Jan 30 21:37:14 crc kubenswrapper[4751]: I0130 21:37:14.647974 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="config-reloader" containerID="cri-o://28f17c906b227a5af5cc4ace126e147801603741df157b46ecf7a45821fe1d9f" gracePeriod=600 Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.517098 4751 generic.go:334] "Generic (PLEG): container finished" podID="d56430b1-227c-4074-8d43-86953ab9f911" containerID="a9ca9e07790f5d346c1a4232c0516dbab0611a34ac86ef5489631c5577ce240b" exitCode=0 Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.517588 4751 generic.go:334] "Generic (PLEG): container finished" podID="d56430b1-227c-4074-8d43-86953ab9f911" containerID="28f17c906b227a5af5cc4ace126e147801603741df157b46ecf7a45821fe1d9f" exitCode=0 Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.517596 4751 generic.go:334] "Generic (PLEG): container finished" podID="d56430b1-227c-4074-8d43-86953ab9f911" containerID="5e628f84f2b1e0bad7fea4bb8e7b42341154ce8e229a8f36477102accdee0cfb" exitCode=0 Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.517138 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d56430b1-227c-4074-8d43-86953ab9f911","Type":"ContainerDied","Data":"a9ca9e07790f5d346c1a4232c0516dbab0611a34ac86ef5489631c5577ce240b"} Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.517629 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d56430b1-227c-4074-8d43-86953ab9f911","Type":"ContainerDied","Data":"28f17c906b227a5af5cc4ace126e147801603741df157b46ecf7a45821fe1d9f"} Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.517640 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d56430b1-227c-4074-8d43-86953ab9f911","Type":"ContainerDied","Data":"5e628f84f2b1e0bad7fea4bb8e7b42341154ce8e229a8f36477102accdee0cfb"} Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.777695 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.923673 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-tls-assets\") pod \"d56430b1-227c-4074-8d43-86953ab9f911\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.924129 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-1\") pod \"d56430b1-227c-4074-8d43-86953ab9f911\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.924171 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-web-config\") pod \"d56430b1-227c-4074-8d43-86953ab9f911\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.924206 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-config\") pod \"d56430b1-227c-4074-8d43-86953ab9f911\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.924261 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-0\") pod \"d56430b1-227c-4074-8d43-86953ab9f911\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.924400 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-thanos-prometheus-http-client-file\") pod \"d56430b1-227c-4074-8d43-86953ab9f911\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.924484 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2g5w\" (UniqueName: \"kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-kube-api-access-b2g5w\") pod \"d56430b1-227c-4074-8d43-86953ab9f911\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.924610 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d56430b1-227c-4074-8d43-86953ab9f911-config-out\") pod \"d56430b1-227c-4074-8d43-86953ab9f911\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.924705 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-2\") pod \"d56430b1-227c-4074-8d43-86953ab9f911\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.924824 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "d56430b1-227c-4074-8d43-86953ab9f911" (UID: "d56430b1-227c-4074-8d43-86953ab9f911"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.924879 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") pod \"d56430b1-227c-4074-8d43-86953ab9f911\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.925001 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "d56430b1-227c-4074-8d43-86953ab9f911" (UID: "d56430b1-227c-4074-8d43-86953ab9f911"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.925687 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "d56430b1-227c-4074-8d43-86953ab9f911" (UID: "d56430b1-227c-4074-8d43-86953ab9f911"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.926052 4751 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.926072 4751 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.926084 4751 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.926991 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "d56430b1-227c-4074-8d43-86953ab9f911" (UID: "d56430b1-227c-4074-8d43-86953ab9f911"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.936431 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d56430b1-227c-4074-8d43-86953ab9f911-config-out" (OuterVolumeSpecName: "config-out") pod "d56430b1-227c-4074-8d43-86953ab9f911" (UID: "d56430b1-227c-4074-8d43-86953ab9f911"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.940161 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-config" (OuterVolumeSpecName: "config") pod "d56430b1-227c-4074-8d43-86953ab9f911" (UID: "d56430b1-227c-4074-8d43-86953ab9f911"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.944719 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-kube-api-access-b2g5w" (OuterVolumeSpecName: "kube-api-access-b2g5w") pod "d56430b1-227c-4074-8d43-86953ab9f911" (UID: "d56430b1-227c-4074-8d43-86953ab9f911"). InnerVolumeSpecName "kube-api-access-b2g5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.950601 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "d56430b1-227c-4074-8d43-86953ab9f911" (UID: "d56430b1-227c-4074-8d43-86953ab9f911"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.972114 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-web-config" (OuterVolumeSpecName: "web-config") pod "d56430b1-227c-4074-8d43-86953ab9f911" (UID: "d56430b1-227c-4074-8d43-86953ab9f911"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.988389 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "d56430b1-227c-4074-8d43-86953ab9f911" (UID: "d56430b1-227c-4074-8d43-86953ab9f911"). InnerVolumeSpecName "pvc-7297f1d7-6116-4005-9637-09e45a6844de". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.027730 4751 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d56430b1-227c-4074-8d43-86953ab9f911-config-out\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.027785 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7297f1d7-6116-4005-9637-09e45a6844de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") on node \"crc\" " Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.027798 4751 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.027808 4751 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-web-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.027819 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.027828 4751 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.027839 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2g5w\" (UniqueName: \"kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-kube-api-access-b2g5w\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.068464 4751 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.068610 4751 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7297f1d7-6116-4005-9637-09e45a6844de" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de") on node "crc" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.139844 4751 reconciler_common.go:293] "Volume detached for volume \"pvc-7297f1d7-6116-4005-9637-09e45a6844de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.528274 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"89e2e33fdba45eeeea45bc8f0586122609ab7724ff300adcdd8032ef2cbd45f3"} Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.528559 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"241b4e9660fbb61180d91be5352ccc244eeb422fac2e80019c074b1eb101d492"} Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.528570 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"9cf88493556e27c29ac9ce94fd84a890a829378eeb33c3920da155c748578804"} Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.528580 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"14a11d7b12287a9f9a1dae58494a3df13105c63a9fbc0ce0e54f7e7cc214e4bc"} Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.530536 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d56430b1-227c-4074-8d43-86953ab9f911","Type":"ContainerDied","Data":"14a262a32c578ab480de0003e92d828da04b4354e1d5c9b7efbfca95d406a828"} Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.530574 4751 scope.go:117] "RemoveContainer" containerID="a9ca9e07790f5d346c1a4232c0516dbab0611a34ac86ef5489631c5577ce240b" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.530611 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.546309 4751 scope.go:117] "RemoveContainer" containerID="28f17c906b227a5af5cc4ace126e147801603741df157b46ecf7a45821fe1d9f" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.559719 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.571009 4751 scope.go:117] "RemoveContainer" containerID="5e628f84f2b1e0bad7fea4bb8e7b42341154ce8e229a8f36477102accdee0cfb" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.571563 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.592245 4751 scope.go:117] "RemoveContainer" containerID="0e8380c6ff95a924287e8674599018ad6d281082245c17624c192e7eea73966f" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.593614 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 21:37:16 crc kubenswrapper[4751]: E0130 21:37:16.594044 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9572d2c-ecb1-4249-9cc0-9a3881e6960c" containerName="ovn-config" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.594062 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9572d2c-ecb1-4249-9cc0-9a3881e6960c" containerName="ovn-config" Jan 30 21:37:16 crc kubenswrapper[4751]: E0130 21:37:16.594087 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="thanos-sidecar" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.594093 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="thanos-sidecar" Jan 30 21:37:16 crc kubenswrapper[4751]: E0130 21:37:16.594108 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="config-reloader" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.594115 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="config-reloader" Jan 30 21:37:16 crc kubenswrapper[4751]: E0130 21:37:16.594129 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="init-config-reloader" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.594134 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="init-config-reloader" Jan 30 21:37:16 crc kubenswrapper[4751]: E0130 21:37:16.594145 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="prometheus" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.594151 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="prometheus" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.594314 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9572d2c-ecb1-4249-9cc0-9a3881e6960c" containerName="ovn-config" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.594345 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="thanos-sidecar" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.594365 4751 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="prometheus" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.594374 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="config-reloader" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.601498 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.603783 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.606062 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.608566 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-hjfsj" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.608737 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.608920 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.609077 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.609221 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.609368 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.609586 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.616147 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751403 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwpth\" (UniqueName: \"kubernetes.io/projected/3e7af95c-7ba2-4e0b-9947-795d9629744c-kube-api-access-hwpth\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751699 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751725 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/3e7af95c-7ba2-4e0b-9947-795d9629744c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751745 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3e7af95c-7ba2-4e0b-9947-795d9629744c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751764 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751784 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7297f1d7-6116-4005-9637-09e45a6844de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751806 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751845 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3e7af95c-7ba2-4e0b-9947-795d9629744c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751870 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751939 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3e7af95c-7ba2-4e0b-9947-795d9629744c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751969 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751997 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3e7af95c-7ba2-4e0b-9947-795d9629744c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.752018 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855400 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwpth\" (UniqueName: \"kubernetes.io/projected/3e7af95c-7ba2-4e0b-9947-795d9629744c-kube-api-access-hwpth\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855439 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855462 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3e7af95c-7ba2-4e0b-9947-795d9629744c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855483 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3e7af95c-7ba2-4e0b-9947-795d9629744c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855498 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855519 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7297f1d7-6116-4005-9637-09e45a6844de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc 
kubenswrapper[4751]: I0130 21:37:16.855542 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855578 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3e7af95c-7ba2-4e0b-9947-795d9629744c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855601 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855641 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3e7af95c-7ba2-4e0b-9947-795d9629744c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855666 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855689 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3e7af95c-7ba2-4e0b-9947-795d9629744c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855706 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.857237 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3e7af95c-7ba2-4e0b-9947-795d9629744c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.857242 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/3e7af95c-7ba2-4e0b-9947-795d9629744c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.866000 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3e7af95c-7ba2-4e0b-9947-795d9629744c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.877452 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.877565 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.882041 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.882970 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.884554 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.884893 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3e7af95c-7ba2-4e0b-9947-795d9629744c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.885183 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc 
kubenswrapper[4751]: I0130 21:37:16.887310 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwpth\" (UniqueName: \"kubernetes.io/projected/3e7af95c-7ba2-4e0b-9947-795d9629744c-kube-api-access-hwpth\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.898053 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3e7af95c-7ba2-4e0b-9947-795d9629744c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.898912 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.898951 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7297f1d7-6116-4005-9637-09e45a6844de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a95dfeb2de129561acc13a0d8e1495cdeeea1e8a0c06c82206df350d4e35d0bf/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:17 crc kubenswrapper[4751]: I0130 21:37:17.004515 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7297f1d7-6116-4005-9637-09e45a6844de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:17 crc kubenswrapper[4751]: I0130 21:37:17.036032 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:17 crc kubenswrapper[4751]: I0130 21:37:17.517532 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 21:37:17 crc kubenswrapper[4751]: W0130 21:37:17.834957 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e7af95c_7ba2_4e0b_9947_795d9629744c.slice/crio-38a83d0085596e4fd29a7a96b241b5a7196453da6e5a891acdb5047466f9f5a5 WatchSource:0}: Error finding container 38a83d0085596e4fd29a7a96b241b5a7196453da6e5a891acdb5047466f9f5a5: Status 404 returned error can't find the container with id 38a83d0085596e4fd29a7a96b241b5a7196453da6e5a891acdb5047466f9f5a5 Jan 30 21:37:18 crc kubenswrapper[4751]: I0130 21:37:18.008421 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d56430b1-227c-4074-8d43-86953ab9f911" path="/var/lib/kubelet/pods/d56430b1-227c-4074-8d43-86953ab9f911/volumes" Jan 30 21:37:18 crc kubenswrapper[4751]: I0130 21:37:18.588673 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"029915db9a08c3651c62c1b28ce1b3b157611e1da6b3b0a09e4689e28b329d21"} Jan 30 21:37:18 crc kubenswrapper[4751]: I0130 21:37:18.589231 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"32f4cf26b4d5634a7aa234f7edaf69a22cde94fecef4036f3b4ef628ccd910d9"} Jan 30 21:37:18 crc kubenswrapper[4751]: I0130 21:37:18.589776 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"f9fb362e2221b63f1fac87f00313c82aa062d1442c5161ccbcbb2abb1d5938ae"} Jan 30 21:37:18 crc kubenswrapper[4751]: I0130 21:37:18.598990 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e7af95c-7ba2-4e0b-9947-795d9629744c","Type":"ContainerStarted","Data":"38a83d0085596e4fd29a7a96b241b5a7196453da6e5a891acdb5047466f9f5a5"} Jan 30 21:37:19 crc kubenswrapper[4751]: I0130 21:37:19.617488 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"b6b9c86b1cb21a79478fbfef2c52220c36f6bd72bfcd705ce1c7e42447a24e9f"} Jan 30 21:37:19 crc kubenswrapper[4751]: I0130 21:37:19.617873 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"cf131aae7d90e0d39d7e40de9dc178eddb608f800985f0805442fdcfceac0037"} Jan 30 21:37:19 crc kubenswrapper[4751]: I0130 21:37:19.617891 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"1fc3868720df81441043b8cea6512bca4b7cea3c696f56e7a1d920098fc6f8f7"} Jan 30 21:37:19 crc kubenswrapper[4751]: I0130 21:37:19.617904 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"1bc1ee641cb136a2de0216fd614525dbcc30607450c491622462f9153d0700ef"} Jan 30 21:37:19 crc kubenswrapper[4751]: I0130 21:37:19.619460 4751 generic.go:334] "Generic (PLEG): 
container finished" podID="32a93444-0221-40b7-9869-428788112ae2" containerID="291bba4a83a01e50f5b8260a72f7589443ccaf7ad2482ecfd294e283e08c6b24" exitCode=0 Jan 30 21:37:19 crc kubenswrapper[4751]: I0130 21:37:19.619505 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r8wrn" event={"ID":"32a93444-0221-40b7-9869-428788112ae2","Type":"ContainerDied","Data":"291bba4a83a01e50f5b8260a72f7589443ccaf7ad2482ecfd294e283e08c6b24"} Jan 30 21:37:19 crc kubenswrapper[4751]: I0130 21:37:19.680002 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.555556389 podStartE2EDuration="44.679985321s" podCreationTimestamp="2026-01-30 21:36:35 +0000 UTC" firstStartedPulling="2026-01-30 21:37:11.765601631 +0000 UTC m=+1370.511424290" lastFinishedPulling="2026-01-30 21:37:17.890030553 +0000 UTC m=+1376.635853222" observedRunningTime="2026-01-30 21:37:19.676474117 +0000 UTC m=+1378.422296776" watchObservedRunningTime="2026-01-30 21:37:19.679985321 +0000 UTC m=+1378.425807970" Jan 30 21:37:19 crc kubenswrapper[4751]: I0130 21:37:19.995411 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cplrw"] Jan 30 21:37:19 crc kubenswrapper[4751]: I0130 21:37:19.997420 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:19 crc kubenswrapper[4751]: I0130 21:37:19.999759 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.021061 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cplrw"] Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.031492 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.039068 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.039112 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.039165 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.039254 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrkjh\" (UniqueName: \"kubernetes.io/projected/a7876a87-ce9e-4d67-a296-cfe228be3d3e-kube-api-access-lrkjh\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " 
pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.039340 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-config\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.039396 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.152474 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.152582 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.152677 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrkjh\" (UniqueName: \"kubernetes.io/projected/a7876a87-ce9e-4d67-a296-cfe228be3d3e-kube-api-access-lrkjh\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.152706 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-config\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.152758 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.152801 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.153576 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 
21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.154113 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.155840 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.156505 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.170718 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-config\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.381451 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrkjh\" (UniqueName: \"kubernetes.io/projected/a7876a87-ce9e-4d67-a296-cfe228be3d3e-kube-api-access-lrkjh\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.391544 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.405582 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.614788 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.241736 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cplrw"] Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.498823 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-r8wrn" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.643204 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e7af95c-7ba2-4e0b-9947-795d9629744c","Type":"ContainerStarted","Data":"177d01f9395c57a0704f8e3be47f47ddcda9844296cb5595f9c79bfbfade602b"} Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.645822 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" event={"ID":"a7876a87-ce9e-4d67-a296-cfe228be3d3e","Type":"ContainerStarted","Data":"8aa6a293385188bd134bcd72ff7081e89e7403adf6b36ab32f6d6b5dfd8657b9"} Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.647517 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r8wrn" event={"ID":"32a93444-0221-40b7-9869-428788112ae2","Type":"ContainerDied","Data":"7a175ccea97ecde04362b48b6632c136c6a64e0161ec0c91d00f4d408467a89a"} Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.647562 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a175ccea97ecde04362b48b6632c136c6a64e0161ec0c91d00f4d408467a89a" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.647585 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-r8wrn" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.682471 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-combined-ca-bundle\") pod \"32a93444-0221-40b7-9869-428788112ae2\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.682814 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcdb2\" (UniqueName: \"kubernetes.io/projected/32a93444-0221-40b7-9869-428788112ae2-kube-api-access-gcdb2\") pod \"32a93444-0221-40b7-9869-428788112ae2\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.683515 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-config-data\") pod \"32a93444-0221-40b7-9869-428788112ae2\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.683707 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-db-sync-config-data\") pod \"32a93444-0221-40b7-9869-428788112ae2\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.686674 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a93444-0221-40b7-9869-428788112ae2-kube-api-access-gcdb2" (OuterVolumeSpecName: "kube-api-access-gcdb2") pod "32a93444-0221-40b7-9869-428788112ae2" (UID: "32a93444-0221-40b7-9869-428788112ae2"). InnerVolumeSpecName "kube-api-access-gcdb2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.696300 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "32a93444-0221-40b7-9869-428788112ae2" (UID: "32a93444-0221-40b7-9869-428788112ae2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.726649 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32a93444-0221-40b7-9869-428788112ae2" (UID: "32a93444-0221-40b7-9869-428788112ae2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.743887 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-config-data" (OuterVolumeSpecName: "config-data") pod "32a93444-0221-40b7-9869-428788112ae2" (UID: "32a93444-0221-40b7-9869-428788112ae2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.789882 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.789912 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcdb2\" (UniqueName: \"kubernetes.io/projected/32a93444-0221-40b7-9869-428788112ae2-kube-api-access-gcdb2\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.789928 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.789937 4751 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.242753 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cplrw"] Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.280773 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-jnhv7"] Jan 30 21:37:22 crc kubenswrapper[4751]: E0130 21:37:22.281304 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a93444-0221-40b7-9869-428788112ae2" containerName="glance-db-sync" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.281344 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a93444-0221-40b7-9869-428788112ae2" containerName="glance-db-sync" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.281626 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a93444-0221-40b7-9869-428788112ae2" containerName="glance-db-sync" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.282994 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.304551 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-jnhv7"] Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.401590 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.401650 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rqmp\" (UniqueName: \"kubernetes.io/projected/37ac1bbe-c547-456d-8b0a-0c29a877775c-kube-api-access-9rqmp\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.401818 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.401932 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-config\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.402024 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.402048 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.503797 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.503846 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.503934 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.503961 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rqmp\" (UniqueName: \"kubernetes.io/projected/37ac1bbe-c547-456d-8b0a-0c29a877775c-kube-api-access-9rqmp\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.504006 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.504035 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-config\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.504814 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.504814 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.504936 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.504995 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-config\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.505463 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.520974 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rqmp\" (UniqueName: 
\"kubernetes.io/projected/37ac1bbe-c547-456d-8b0a-0c29a877775c-kube-api-access-9rqmp\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.599864 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.881290 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-2gxmh"] Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.882961 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2gxmh" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.933416 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2gxmh"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.024129 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf1f702d-7084-4e85-add9-15c10223d801-operator-scripts\") pod \"barbican-db-create-2gxmh\" (UID: \"bf1f702d-7084-4e85-add9-15c10223d801\") " pod="openstack/barbican-db-create-2gxmh" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.024207 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdlzh\" (UniqueName: \"kubernetes.io/projected/bf1f702d-7084-4e85-add9-15c10223d801-kube-api-access-hdlzh\") pod \"barbican-db-create-2gxmh\" (UID: \"bf1f702d-7084-4e85-add9-15c10223d801\") " pod="openstack/barbican-db-create-2gxmh" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.053775 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-zhgsw"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.055042 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zhgsw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.092500 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-31bb-account-create-update-w6h5f"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.094281 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-31bb-account-create-update-w6h5f" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.098902 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.125594 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf1f702d-7084-4e85-add9-15c10223d801-operator-scripts\") pod \"barbican-db-create-2gxmh\" (UID: \"bf1f702d-7084-4e85-add9-15c10223d801\") " pod="openstack/barbican-db-create-2gxmh" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.125687 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdlzh\" (UniqueName: \"kubernetes.io/projected/bf1f702d-7084-4e85-add9-15c10223d801-kube-api-access-hdlzh\") pod \"barbican-db-create-2gxmh\" (UID: \"bf1f702d-7084-4e85-add9-15c10223d801\") " pod="openstack/barbican-db-create-2gxmh" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.126860 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zhgsw"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.127469 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf1f702d-7084-4e85-add9-15c10223d801-operator-scripts\") pod \"barbican-db-create-2gxmh\" (UID: \"bf1f702d-7084-4e85-add9-15c10223d801\") " pod="openstack/barbican-db-create-2gxmh" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.155146 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdlzh\" (UniqueName: \"kubernetes.io/projected/bf1f702d-7084-4e85-add9-15c10223d801-kube-api-access-hdlzh\") pod \"barbican-db-create-2gxmh\" (UID: \"bf1f702d-7084-4e85-add9-15c10223d801\") " pod="openstack/barbican-db-create-2gxmh" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.168874 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-31bb-account-create-update-w6h5f"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.227176 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxc48\" (UniqueName: \"kubernetes.io/projected/00437219-cb6b-48ad-a0cb-d75b82412ba1-kube-api-access-dxc48\") pod \"cinder-db-create-zhgsw\" (UID: \"00437219-cb6b-48ad-a0cb-d75b82412ba1\") " pod="openstack/cinder-db-create-zhgsw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.227237 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00437219-cb6b-48ad-a0cb-d75b82412ba1-operator-scripts\") pod \"cinder-db-create-zhgsw\" (UID: \"00437219-cb6b-48ad-a0cb-d75b82412ba1\") " pod="openstack/cinder-db-create-zhgsw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.227281 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b9f9eed-02b1-4541-8ebb-34826639233b-operator-scripts\") pod \"barbican-31bb-account-create-update-w6h5f\" (UID: \"3b9f9eed-02b1-4541-8ebb-34826639233b\") " pod="openstack/barbican-31bb-account-create-update-w6h5f" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.227314 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-t78ct\" (UniqueName: \"kubernetes.io/projected/3b9f9eed-02b1-4541-8ebb-34826639233b-kube-api-access-t78ct\") pod \"barbican-31bb-account-create-update-w6h5f\" (UID: \"3b9f9eed-02b1-4541-8ebb-34826639233b\") " pod="openstack/barbican-31bb-account-create-update-w6h5f" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.259611 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2gxmh" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.322313 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-hr9lv"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.325092 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-hr9lv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.345404 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxc48\" (UniqueName: \"kubernetes.io/projected/00437219-cb6b-48ad-a0cb-d75b82412ba1-kube-api-access-dxc48\") pod \"cinder-db-create-zhgsw\" (UID: \"00437219-cb6b-48ad-a0cb-d75b82412ba1\") " pod="openstack/cinder-db-create-zhgsw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.345463 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00437219-cb6b-48ad-a0cb-d75b82412ba1-operator-scripts\") pod \"cinder-db-create-zhgsw\" (UID: \"00437219-cb6b-48ad-a0cb-d75b82412ba1\") " pod="openstack/cinder-db-create-zhgsw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.345530 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b9f9eed-02b1-4541-8ebb-34826639233b-operator-scripts\") pod \"barbican-31bb-account-create-update-w6h5f\" (UID: \"3b9f9eed-02b1-4541-8ebb-34826639233b\") " pod="openstack/barbican-31bb-account-create-update-w6h5f" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.345601 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t78ct\" (UniqueName: \"kubernetes.io/projected/3b9f9eed-02b1-4541-8ebb-34826639233b-kube-api-access-t78ct\") pod \"barbican-31bb-account-create-update-w6h5f\" (UID: \"3b9f9eed-02b1-4541-8ebb-34826639233b\") " pod="openstack/barbican-31bb-account-create-update-w6h5f" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.348834 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00437219-cb6b-48ad-a0cb-d75b82412ba1-operator-scripts\") pod \"cinder-db-create-zhgsw\" (UID: \"00437219-cb6b-48ad-a0cb-d75b82412ba1\") " pod="openstack/cinder-db-create-zhgsw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.364589 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-hr9lv"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.365407 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxc48\" (UniqueName: \"kubernetes.io/projected/00437219-cb6b-48ad-a0cb-d75b82412ba1-kube-api-access-dxc48\") pod \"cinder-db-create-zhgsw\" (UID: \"00437219-cb6b-48ad-a0cb-d75b82412ba1\") " pod="openstack/cinder-db-create-zhgsw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.375197 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3b9f9eed-02b1-4541-8ebb-34826639233b-operator-scripts\") pod \"barbican-31bb-account-create-update-w6h5f\" (UID: \"3b9f9eed-02b1-4541-8ebb-34826639233b\") " pod="openstack/barbican-31bb-account-create-update-w6h5f" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.415577 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-z99cv"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.417584 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.419877 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t78ct\" (UniqueName: \"kubernetes.io/projected/3b9f9eed-02b1-4541-8ebb-34826639233b-kube-api-access-t78ct\") pod \"barbican-31bb-account-create-update-w6h5f\" (UID: \"3b9f9eed-02b1-4541-8ebb-34826639233b\") " pod="openstack/barbican-31bb-account-create-update-w6h5f" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.420063 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.420077 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.420247 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6zjrt" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.420388 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.443028 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-z99cv"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.447860 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb22w\" (UniqueName: \"kubernetes.io/projected/056813ab-3913-42db-afa1-a79cb8e3a3c9-kube-api-access-gb22w\") pod \"heat-db-create-hr9lv\" (UID: \"056813ab-3913-42db-afa1-a79cb8e3a3c9\") " pod="openstack/heat-db-create-hr9lv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.447995 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/056813ab-3913-42db-afa1-a79cb8e3a3c9-operator-scripts\") pod \"heat-db-create-hr9lv\" (UID: \"056813ab-3913-42db-afa1-a79cb8e3a3c9\") " pod="openstack/heat-db-create-hr9lv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.448334 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zhgsw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.495229 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-f7f7-account-create-update-d88cz"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.497237 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f7f7-account-create-update-d88cz" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.504068 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.515994 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-31bb-account-create-update-w6h5f" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.517634 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f7f7-account-create-update-d88cz"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.537668 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-jnhv7"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.550471 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb22w\" (UniqueName: \"kubernetes.io/projected/056813ab-3913-42db-afa1-a79cb8e3a3c9-kube-api-access-gb22w\") pod \"heat-db-create-hr9lv\" (UID: \"056813ab-3913-42db-afa1-a79cb8e3a3c9\") " pod="openstack/heat-db-create-hr9lv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.550558 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rr69\" (UniqueName: \"kubernetes.io/projected/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-kube-api-access-2rr69\") pod \"keystone-db-sync-z99cv\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.550620 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-config-data\") pod \"keystone-db-sync-z99cv\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.550730 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/056813ab-3913-42db-afa1-a79cb8e3a3c9-operator-scripts\") pod \"heat-db-create-hr9lv\" (UID: \"056813ab-3913-42db-afa1-a79cb8e3a3c9\") " pod="openstack/heat-db-create-hr9lv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.550868 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-combined-ca-bundle\") pod \"keystone-db-sync-z99cv\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.551447 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/056813ab-3913-42db-afa1-a79cb8e3a3c9-operator-scripts\") pod \"heat-db-create-hr9lv\" (UID: \"056813ab-3913-42db-afa1-a79cb8e3a3c9\") " pod="openstack/heat-db-create-hr9lv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.554288 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-2618-account-create-update-fdl95"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.555641 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-2618-account-create-update-fdl95" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.559752 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.567782 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb22w\" (UniqueName: \"kubernetes.io/projected/056813ab-3913-42db-afa1-a79cb8e3a3c9-kube-api-access-gb22w\") pod \"heat-db-create-hr9lv\" (UID: \"056813ab-3913-42db-afa1-a79cb8e3a3c9\") " pod="openstack/heat-db-create-hr9lv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.638687 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-2618-account-create-update-fdl95"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.653210 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rr69\" (UniqueName: \"kubernetes.io/projected/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-kube-api-access-2rr69\") pod \"keystone-db-sync-z99cv\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.653262 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-config-data\") pod \"keystone-db-sync-z99cv\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.653336 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9112f9c-911e-47d4-be64-e6f90fa6fa35-operator-scripts\") pod \"cinder-f7f7-account-create-update-d88cz\" (UID: \"a9112f9c-911e-47d4-be64-e6f90fa6fa35\") " pod="openstack/cinder-f7f7-account-create-update-d88cz" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.653372 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e63e6079-6772-46c3-9ec3-1e01741a210f-operator-scripts\") pod \"heat-2618-account-create-update-fdl95\" (UID: \"e63e6079-6772-46c3-9ec3-1e01741a210f\") " pod="openstack/heat-2618-account-create-update-fdl95" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.653390 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9z5w\" (UniqueName: \"kubernetes.io/projected/e63e6079-6772-46c3-9ec3-1e01741a210f-kube-api-access-c9z5w\") pod \"heat-2618-account-create-update-fdl95\" (UID: \"e63e6079-6772-46c3-9ec3-1e01741a210f\") " pod="openstack/heat-2618-account-create-update-fdl95" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.653475 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-combined-ca-bundle\") pod \"keystone-db-sync-z99cv\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.653495 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdhmt\" (UniqueName: \"kubernetes.io/projected/a9112f9c-911e-47d4-be64-e6f90fa6fa35-kube-api-access-sdhmt\") pod 
\"cinder-f7f7-account-create-update-d88cz\" (UID: \"a9112f9c-911e-47d4-be64-e6f90fa6fa35\") " pod="openstack/cinder-f7f7-account-create-update-d88cz" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.659944 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-combined-ca-bundle\") pod \"keystone-db-sync-z99cv\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.664487 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-lqv47"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.668056 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lqv47" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.675634 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-config-data\") pod \"keystone-db-sync-z99cv\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.685975 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0f07-account-create-update-fr6kw"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.696417 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0f07-account-create-update-fr6kw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.698904 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.699101 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" event={"ID":"a7876a87-ce9e-4d67-a296-cfe228be3d3e","Type":"ContainerStarted","Data":"58af424fb61237df15655de4cccc59760be3dabac1d92e813f637b367e667a53"} Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.704439 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rr69\" (UniqueName: \"kubernetes.io/projected/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-kube-api-access-2rr69\") pod \"keystone-db-sync-z99cv\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.707674 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" event={"ID":"37ac1bbe-c547-456d-8b0a-0c29a877775c","Type":"ContainerStarted","Data":"6864e85d2504e6732a265d6ea2bacb5cab1c5dcba817c3a3b4ad3a6ad9332eef"} Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.729064 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lqv47"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.749153 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0f07-account-create-update-fr6kw"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.755105 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9112f9c-911e-47d4-be64-e6f90fa6fa35-operator-scripts\") pod \"cinder-f7f7-account-create-update-d88cz\" (UID: \"a9112f9c-911e-47d4-be64-e6f90fa6fa35\") " pod="openstack/cinder-f7f7-account-create-update-d88cz" Jan 30 21:37:23 crc kubenswrapper[4751]: 
I0130 21:37:23.755175 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6z72\" (UniqueName: \"kubernetes.io/projected/0297c6e3-62f8-49cc-a073-8bb104949456-kube-api-access-k6z72\") pod \"neutron-db-create-lqv47\" (UID: \"0297c6e3-62f8-49cc-a073-8bb104949456\") " pod="openstack/neutron-db-create-lqv47" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.755206 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e63e6079-6772-46c3-9ec3-1e01741a210f-operator-scripts\") pod \"heat-2618-account-create-update-fdl95\" (UID: \"e63e6079-6772-46c3-9ec3-1e01741a210f\") " pod="openstack/heat-2618-account-create-update-fdl95" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.755242 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9z5w\" (UniqueName: \"kubernetes.io/projected/e63e6079-6772-46c3-9ec3-1e01741a210f-kube-api-access-c9z5w\") pod \"heat-2618-account-create-update-fdl95\" (UID: \"e63e6079-6772-46c3-9ec3-1e01741a210f\") " pod="openstack/heat-2618-account-create-update-fdl95" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.755782 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9112f9c-911e-47d4-be64-e6f90fa6fa35-operator-scripts\") pod \"cinder-f7f7-account-create-update-d88cz\" (UID: \"a9112f9c-911e-47d4-be64-e6f90fa6fa35\") " pod="openstack/cinder-f7f7-account-create-update-d88cz" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.755821 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0297c6e3-62f8-49cc-a073-8bb104949456-operator-scripts\") pod \"neutron-db-create-lqv47\" (UID: \"0297c6e3-62f8-49cc-a073-8bb104949456\") " pod="openstack/neutron-db-create-lqv47" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.756115 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdhmt\" (UniqueName: \"kubernetes.io/projected/a9112f9c-911e-47d4-be64-e6f90fa6fa35-kube-api-access-sdhmt\") pod \"cinder-f7f7-account-create-update-d88cz\" (UID: \"a9112f9c-911e-47d4-be64-e6f90fa6fa35\") " pod="openstack/cinder-f7f7-account-create-update-d88cz" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.773051 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e63e6079-6772-46c3-9ec3-1e01741a210f-operator-scripts\") pod \"heat-2618-account-create-update-fdl95\" (UID: \"e63e6079-6772-46c3-9ec3-1e01741a210f\") " pod="openstack/heat-2618-account-create-update-fdl95" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.782189 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdhmt\" (UniqueName: \"kubernetes.io/projected/a9112f9c-911e-47d4-be64-e6f90fa6fa35-kube-api-access-sdhmt\") pod \"cinder-f7f7-account-create-update-d88cz\" (UID: \"a9112f9c-911e-47d4-be64-e6f90fa6fa35\") " pod="openstack/cinder-f7f7-account-create-update-d88cz" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.783490 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9z5w\" (UniqueName: \"kubernetes.io/projected/e63e6079-6772-46c3-9ec3-1e01741a210f-kube-api-access-c9z5w\") pod 
\"heat-2618-account-create-update-fdl95\" (UID: \"e63e6079-6772-46c3-9ec3-1e01741a210f\") " pod="openstack/heat-2618-account-create-update-fdl95" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.781737 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-hr9lv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.799778 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.819917 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f7f7-account-create-update-d88cz" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.858532 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6z72\" (UniqueName: \"kubernetes.io/projected/0297c6e3-62f8-49cc-a073-8bb104949456-kube-api-access-k6z72\") pod \"neutron-db-create-lqv47\" (UID: \"0297c6e3-62f8-49cc-a073-8bb104949456\") " pod="openstack/neutron-db-create-lqv47" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.858592 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-operator-scripts\") pod \"neutron-0f07-account-create-update-fr6kw\" (UID: \"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c\") " pod="openstack/neutron-0f07-account-create-update-fr6kw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.858638 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78955\" (UniqueName: \"kubernetes.io/projected/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-kube-api-access-78955\") pod \"neutron-0f07-account-create-update-fr6kw\" (UID: \"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c\") " pod="openstack/neutron-0f07-account-create-update-fr6kw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.858707 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0297c6e3-62f8-49cc-a073-8bb104949456-operator-scripts\") pod \"neutron-db-create-lqv47\" (UID: \"0297c6e3-62f8-49cc-a073-8bb104949456\") " pod="openstack/neutron-db-create-lqv47" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.859426 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0297c6e3-62f8-49cc-a073-8bb104949456-operator-scripts\") pod \"neutron-db-create-lqv47\" (UID: \"0297c6e3-62f8-49cc-a073-8bb104949456\") " pod="openstack/neutron-db-create-lqv47" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.875432 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6z72\" (UniqueName: \"kubernetes.io/projected/0297c6e3-62f8-49cc-a073-8bb104949456-kube-api-access-k6z72\") pod \"neutron-db-create-lqv47\" (UID: \"0297c6e3-62f8-49cc-a073-8bb104949456\") " pod="openstack/neutron-db-create-lqv47" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.879858 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-2618-account-create-update-fdl95" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.961237 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-operator-scripts\") pod \"neutron-0f07-account-create-update-fr6kw\" (UID: \"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c\") " pod="openstack/neutron-0f07-account-create-update-fr6kw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.961561 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78955\" (UniqueName: \"kubernetes.io/projected/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-kube-api-access-78955\") pod \"neutron-0f07-account-create-update-fr6kw\" (UID: \"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c\") " pod="openstack/neutron-0f07-account-create-update-fr6kw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.962683 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-operator-scripts\") pod \"neutron-0f07-account-create-update-fr6kw\" (UID: \"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c\") " pod="openstack/neutron-0f07-account-create-update-fr6kw" Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.012744 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78955\" (UniqueName: \"kubernetes.io/projected/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-kube-api-access-78955\") pod \"neutron-0f07-account-create-update-fr6kw\" (UID: \"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c\") " pod="openstack/neutron-0f07-account-create-update-fr6kw" Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.054783 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lqv47" Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.079415 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2gxmh"] Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.079882 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0f07-account-create-update-fr6kw" Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.207744 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zhgsw"] Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.222766 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-31bb-account-create-update-w6h5f"] Jan 30 21:37:24 crc kubenswrapper[4751]: W0130 21:37:24.225664 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b9f9eed_02b1_4541_8ebb_34826639233b.slice/crio-d0f85c50ce506d7335344aa42ebdeec0dc51661e674c8577501465433e6f645a WatchSource:0}: Error finding container d0f85c50ce506d7335344aa42ebdeec0dc51661e674c8577501465433e6f645a: Status 404 returned error can't find the container with id d0f85c50ce506d7335344aa42ebdeec0dc51661e674c8577501465433e6f645a Jan 30 21:37:24 crc kubenswrapper[4751]: W0130 21:37:24.229136 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00437219_cb6b_48ad_a0cb_d75b82412ba1.slice/crio-30cf4d0dacd7fd4fd22cf05d9b571d841adae3d71171a81d492efaee7e75a8c2 WatchSource:0}: Error finding container 30cf4d0dacd7fd4fd22cf05d9b571d841adae3d71171a81d492efaee7e75a8c2: Status 404 returned error can't find the container with id 30cf4d0dacd7fd4fd22cf05d9b571d841adae3d71171a81d492efaee7e75a8c2 Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.491370 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-hr9lv"] Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.630484 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-2618-account-create-update-fdl95"] Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.647196 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f7f7-account-create-update-d88cz"] Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.658688 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-z99cv"] Jan 30 21:37:24 crc kubenswrapper[4751]: W0130 21:37:24.673545 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bc5d80d_ae17_431d_8e0f_6003af0fa6b1.slice/crio-217787355da3f9922a4df5837ffe3d8b89b3671ee38fbdd59e2d1b3a45833a3c WatchSource:0}: Error finding container 217787355da3f9922a4df5837ffe3d8b89b3671ee38fbdd59e2d1b3a45833a3c: Status 404 returned error can't find the container with id 217787355da3f9922a4df5837ffe3d8b89b3671ee38fbdd59e2d1b3a45833a3c Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.747518 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2gxmh" event={"ID":"bf1f702d-7084-4e85-add9-15c10223d801","Type":"ContainerStarted","Data":"081f6f9ef52a04848276eea3741fadf9bc134d70d5112e179f163c9ecb46984e"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.747879 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2gxmh" event={"ID":"bf1f702d-7084-4e85-add9-15c10223d801","Type":"ContainerStarted","Data":"f37db6345203c46035cf2c18f2b4711cfad519df170163afdd9effa521c52d7f"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.749940 4751 generic.go:334] "Generic (PLEG): container finished" podID="37ac1bbe-c547-456d-8b0a-0c29a877775c" 
containerID="3fe25ad7467fd6a359800e1d2c4132e606e75ea363c0815021b0fb7427ca7b89" exitCode=0 Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.749988 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" event={"ID":"37ac1bbe-c547-456d-8b0a-0c29a877775c","Type":"ContainerDied","Data":"3fe25ad7467fd6a359800e1d2c4132e606e75ea363c0815021b0fb7427ca7b89"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.767567 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-2gxmh" podStartSLOduration=2.767548342 podStartE2EDuration="2.767548342s" podCreationTimestamp="2026-01-30 21:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:37:24.767198763 +0000 UTC m=+1383.513021412" watchObservedRunningTime="2026-01-30 21:37:24.767548342 +0000 UTC m=+1383.513370991" Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.776625 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-31bb-account-create-update-w6h5f" event={"ID":"3b9f9eed-02b1-4541-8ebb-34826639233b","Type":"ContainerStarted","Data":"51523e8d2edcc2046cb1a83c98d7a2fbd7964b697b149674907f1751f57faefe"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.776670 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-31bb-account-create-update-w6h5f" event={"ID":"3b9f9eed-02b1-4541-8ebb-34826639233b","Type":"ContainerStarted","Data":"d0f85c50ce506d7335344aa42ebdeec0dc51661e674c8577501465433e6f645a"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.810123 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-hr9lv" event={"ID":"056813ab-3913-42db-afa1-a79cb8e3a3c9","Type":"ContainerStarted","Data":"e1328141a3657eae04671c0e1b8d5daf9d2fd7acd381b279a7654fac0a691f97"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.818018 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zhgsw" event={"ID":"00437219-cb6b-48ad-a0cb-d75b82412ba1","Type":"ContainerStarted","Data":"30cf4d0dacd7fd4fd22cf05d9b571d841adae3d71171a81d492efaee7e75a8c2"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.830306 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f7f7-account-create-update-d88cz" event={"ID":"a9112f9c-911e-47d4-be64-e6f90fa6fa35","Type":"ContainerStarted","Data":"eb7be4270b42ffce4b76f159858fa9a7aa3755769d6109ee488fcb1be59f44c4"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.840909 4751 generic.go:334] "Generic (PLEG): container finished" podID="a7876a87-ce9e-4d67-a296-cfe228be3d3e" containerID="58af424fb61237df15655de4cccc59760be3dabac1d92e813f637b367e667a53" exitCode=0 Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.841024 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" event={"ID":"a7876a87-ce9e-4d67-a296-cfe228be3d3e","Type":"ContainerDied","Data":"58af424fb61237df15655de4cccc59760be3dabac1d92e813f637b367e667a53"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.842840 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z99cv" event={"ID":"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1","Type":"ContainerStarted","Data":"217787355da3f9922a4df5837ffe3d8b89b3671ee38fbdd59e2d1b3a45833a3c"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.845183 4751 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/barbican-31bb-account-create-update-w6h5f" podStartSLOduration=1.845163302 podStartE2EDuration="1.845163302s" podCreationTimestamp="2026-01-30 21:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:37:24.808624937 +0000 UTC m=+1383.554447586" watchObservedRunningTime="2026-01-30 21:37:24.845163302 +0000 UTC m=+1383.590985941" Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.845521 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-2618-account-create-update-fdl95" event={"ID":"e63e6079-6772-46c3-9ec3-1e01741a210f","Type":"ContainerStarted","Data":"15f677f718a80e7a19e65a1e37fb95a181277e9a1d14c765d1593019e2e9f3c0"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.897958 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-zhgsw" podStartSLOduration=2.8979377509999997 podStartE2EDuration="2.897937751s" podCreationTimestamp="2026-01-30 21:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:37:24.832095224 +0000 UTC m=+1383.577917873" watchObservedRunningTime="2026-01-30 21:37:24.897937751 +0000 UTC m=+1383.643760400" Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.898452 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lqv47"] Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.081174 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0f07-account-create-update-fr6kw"] Jan 30 21:37:25 crc kubenswrapper[4751]: W0130 21:37:25.163920 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f3bf7e5_bd0d_46f0_b5bf_86fe9e6c428c.slice/crio-a4945e2a89acdb4e8102ab0d694b98c9850bbfc9f1b5e7a4eafc6349cfa65fb3 WatchSource:0}: Error finding container a4945e2a89acdb4e8102ab0d694b98c9850bbfc9f1b5e7a4eafc6349cfa65fb3: Status 404 returned error can't find the container with id a4945e2a89acdb4e8102ab0d694b98c9850bbfc9f1b5e7a4eafc6349cfa65fb3 Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.447959 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.563585 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-config\") pod \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.563721 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-swift-storage-0\") pod \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.563751 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrkjh\" (UniqueName: \"kubernetes.io/projected/a7876a87-ce9e-4d67-a296-cfe228be3d3e-kube-api-access-lrkjh\") pod \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.563851 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-svc\") pod \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.563918 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-nb\") pod \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.564029 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-sb\") pod \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.576431 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7876a87-ce9e-4d67-a296-cfe228be3d3e-kube-api-access-lrkjh" (OuterVolumeSpecName: "kube-api-access-lrkjh") pod "a7876a87-ce9e-4d67-a296-cfe228be3d3e" (UID: "a7876a87-ce9e-4d67-a296-cfe228be3d3e"). InnerVolumeSpecName "kube-api-access-lrkjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.636928 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a7876a87-ce9e-4d67-a296-cfe228be3d3e" (UID: "a7876a87-ce9e-4d67-a296-cfe228be3d3e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.652503 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a7876a87-ce9e-4d67-a296-cfe228be3d3e" (UID: "a7876a87-ce9e-4d67-a296-cfe228be3d3e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.669925 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a7876a87-ce9e-4d67-a296-cfe228be3d3e" (UID: "a7876a87-ce9e-4d67-a296-cfe228be3d3e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.671473 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.671723 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrkjh\" (UniqueName: \"kubernetes.io/projected/a7876a87-ce9e-4d67-a296-cfe228be3d3e-kube-api-access-lrkjh\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.671828 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.671899 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.671728 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-config" (OuterVolumeSpecName: "config") pod "a7876a87-ce9e-4d67-a296-cfe228be3d3e" (UID: "a7876a87-ce9e-4d67-a296-cfe228be3d3e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.713796 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a7876a87-ce9e-4d67-a296-cfe228be3d3e" (UID: "a7876a87-ce9e-4d67-a296-cfe228be3d3e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.774226 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.774272 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.864727 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" event={"ID":"37ac1bbe-c547-456d-8b0a-0c29a877775c","Type":"ContainerStarted","Data":"5964e334ee213f037ca3d06aae948d2d9e897aa60cd6fd6594177910b8efb612"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.865975 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.872427 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9f9eed-02b1-4541-8ebb-34826639233b" containerID="51523e8d2edcc2046cb1a83c98d7a2fbd7964b697b149674907f1751f57faefe" exitCode=0 Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.872586 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-31bb-account-create-update-w6h5f" event={"ID":"3b9f9eed-02b1-4541-8ebb-34826639233b","Type":"ContainerDied","Data":"51523e8d2edcc2046cb1a83c98d7a2fbd7964b697b149674907f1751f57faefe"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.880035 4751 generic.go:334] "Generic (PLEG): container finished" podID="e63e6079-6772-46c3-9ec3-1e01741a210f" containerID="c5ab688f9b8e1fb82010bd34dac14cc2f514cc43545c635a532a50efe0bee3a6" exitCode=0 Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.880105 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-2618-account-create-update-fdl95" event={"ID":"e63e6079-6772-46c3-9ec3-1e01741a210f","Type":"ContainerDied","Data":"c5ab688f9b8e1fb82010bd34dac14cc2f514cc43545c635a532a50efe0bee3a6"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.886868 4751 generic.go:334] "Generic (PLEG): container finished" podID="00437219-cb6b-48ad-a0cb-d75b82412ba1" containerID="9b73d59359bfb3a5bef8ccdbc1b9174270c6e66e22c29e992c6a512a45cd76ed" exitCode=0 Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.886938 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zhgsw" event={"ID":"00437219-cb6b-48ad-a0cb-d75b82412ba1","Type":"ContainerDied","Data":"9b73d59359bfb3a5bef8ccdbc1b9174270c6e66e22c29e992c6a512a45cd76ed"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.894033 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lqv47" event={"ID":"0297c6e3-62f8-49cc-a073-8bb104949456","Type":"ContainerStarted","Data":"757678878d640ed42bebe096fefc08e81d2dc4fdaa39596d495dfc07a6e988a4"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.894084 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lqv47" event={"ID":"0297c6e3-62f8-49cc-a073-8bb104949456","Type":"ContainerStarted","Data":"e3f3f61970d72f82ffba4d9a80464a10e8b9c51e7583102951e1de7d389e2988"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.896741 4751 generic.go:334] "Generic (PLEG): container finished" 
podID="a9112f9c-911e-47d4-be64-e6f90fa6fa35" containerID="dbe7739dccd34474fee5592432c44f2757e5e43cc8cb53f953f6011cf0eab9eb" exitCode=0 Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.896799 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f7f7-account-create-update-d88cz" event={"ID":"a9112f9c-911e-47d4-be64-e6f90fa6fa35","Type":"ContainerDied","Data":"dbe7739dccd34474fee5592432c44f2757e5e43cc8cb53f953f6011cf0eab9eb"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.897192 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" podStartSLOduration=3.897178048 podStartE2EDuration="3.897178048s" podCreationTimestamp="2026-01-30 21:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:37:25.888004663 +0000 UTC m=+1384.633827312" watchObservedRunningTime="2026-01-30 21:37:25.897178048 +0000 UTC m=+1384.643000697" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.898894 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0f07-account-create-update-fr6kw" event={"ID":"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c","Type":"ContainerStarted","Data":"5e236b245c56a064616f5c0cfe68da26d9003a62ee339d2b96a7cc68c86cbcf4"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.898927 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0f07-account-create-update-fr6kw" event={"ID":"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c","Type":"ContainerStarted","Data":"a4945e2a89acdb4e8102ab0d694b98c9850bbfc9f1b5e7a4eafc6349cfa65fb3"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.900497 4751 generic.go:334] "Generic (PLEG): container finished" podID="bf1f702d-7084-4e85-add9-15c10223d801" containerID="081f6f9ef52a04848276eea3741fadf9bc134d70d5112e179f163c9ecb46984e" exitCode=0 Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.900535 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2gxmh" event={"ID":"bf1f702d-7084-4e85-add9-15c10223d801","Type":"ContainerDied","Data":"081f6f9ef52a04848276eea3741fadf9bc134d70d5112e179f163c9ecb46984e"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.901474 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" event={"ID":"a7876a87-ce9e-4d67-a296-cfe228be3d3e","Type":"ContainerDied","Data":"8aa6a293385188bd134bcd72ff7081e89e7403adf6b36ab32f6d6b5dfd8657b9"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.901505 4751 scope.go:117] "RemoveContainer" containerID="58af424fb61237df15655de4cccc59760be3dabac1d92e813f637b367e667a53" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.901608 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.934030 4751 generic.go:334] "Generic (PLEG): container finished" podID="056813ab-3913-42db-afa1-a79cb8e3a3c9" containerID="86dc09eda61ac7de53bc29716e31ede7719959b2e5920e15b3c99ca75f4be060" exitCode=0 Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.934079 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-hr9lv" event={"ID":"056813ab-3913-42db-afa1-a79cb8e3a3c9","Type":"ContainerDied","Data":"86dc09eda61ac7de53bc29716e31ede7719959b2e5920e15b3c99ca75f4be060"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.992983 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-0f07-account-create-update-fr6kw" podStartSLOduration=2.992967634 podStartE2EDuration="2.992967634s" podCreationTimestamp="2026-01-30 21:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:37:25.978674983 +0000 UTC m=+1384.724497642" watchObservedRunningTime="2026-01-30 21:37:25.992967634 +0000 UTC m=+1384.738790283" Jan 30 21:37:26 crc kubenswrapper[4751]: I0130 21:37:26.059226 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cplrw"] Jan 30 21:37:26 crc kubenswrapper[4751]: I0130 21:37:26.067355 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cplrw"] Jan 30 21:37:26 crc kubenswrapper[4751]: I0130 21:37:26.950721 4751 generic.go:334] "Generic (PLEG): container finished" podID="5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c" containerID="5e236b245c56a064616f5c0cfe68da26d9003a62ee339d2b96a7cc68c86cbcf4" exitCode=0 Jan 30 21:37:26 crc kubenswrapper[4751]: I0130 21:37:26.950804 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0f07-account-create-update-fr6kw" event={"ID":"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c","Type":"ContainerDied","Data":"5e236b245c56a064616f5c0cfe68da26d9003a62ee339d2b96a7cc68c86cbcf4"} Jan 30 21:37:26 crc kubenswrapper[4751]: I0130 21:37:26.961279 4751 generic.go:334] "Generic (PLEG): container finished" podID="0297c6e3-62f8-49cc-a073-8bb104949456" containerID="757678878d640ed42bebe096fefc08e81d2dc4fdaa39596d495dfc07a6e988a4" exitCode=0 Jan 30 21:37:26 crc kubenswrapper[4751]: I0130 21:37:26.961412 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lqv47" event={"ID":"0297c6e3-62f8-49cc-a073-8bb104949456","Type":"ContainerDied","Data":"757678878d640ed42bebe096fefc08e81d2dc4fdaa39596d495dfc07a6e988a4"} Jan 30 21:37:27 crc kubenswrapper[4751]: I0130 21:37:27.996289 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7876a87-ce9e-4d67-a296-cfe228be3d3e" path="/var/lib/kubelet/pods/a7876a87-ce9e-4d67-a296-cfe228be3d3e/volumes" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.475391 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lqv47" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.482059 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-2618-account-create-update-fdl95" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.498693 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-zhgsw" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.539709 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-31bb-account-create-update-w6h5f" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.549093 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2gxmh" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.554645 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f7f7-account-create-update-d88cz" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.573045 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0f07-account-create-update-fr6kw" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.579809 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-hr9lv" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.596888 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9z5w\" (UniqueName: \"kubernetes.io/projected/e63e6079-6772-46c3-9ec3-1e01741a210f-kube-api-access-c9z5w\") pod \"e63e6079-6772-46c3-9ec3-1e01741a210f\" (UID: \"e63e6079-6772-46c3-9ec3-1e01741a210f\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.596966 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6z72\" (UniqueName: \"kubernetes.io/projected/0297c6e3-62f8-49cc-a073-8bb104949456-kube-api-access-k6z72\") pod \"0297c6e3-62f8-49cc-a073-8bb104949456\" (UID: \"0297c6e3-62f8-49cc-a073-8bb104949456\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.596982 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0297c6e3-62f8-49cc-a073-8bb104949456-operator-scripts\") pod \"0297c6e3-62f8-49cc-a073-8bb104949456\" (UID: \"0297c6e3-62f8-49cc-a073-8bb104949456\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.597006 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e63e6079-6772-46c3-9ec3-1e01741a210f-operator-scripts\") pod \"e63e6079-6772-46c3-9ec3-1e01741a210f\" (UID: \"e63e6079-6772-46c3-9ec3-1e01741a210f\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.599201 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e63e6079-6772-46c3-9ec3-1e01741a210f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e63e6079-6772-46c3-9ec3-1e01741a210f" (UID: "e63e6079-6772-46c3-9ec3-1e01741a210f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.599352 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0297c6e3-62f8-49cc-a073-8bb104949456-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0297c6e3-62f8-49cc-a073-8bb104949456" (UID: "0297c6e3-62f8-49cc-a073-8bb104949456"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.607670 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e63e6079-6772-46c3-9ec3-1e01741a210f-kube-api-access-c9z5w" (OuterVolumeSpecName: "kube-api-access-c9z5w") pod "e63e6079-6772-46c3-9ec3-1e01741a210f" (UID: "e63e6079-6772-46c3-9ec3-1e01741a210f"). InnerVolumeSpecName "kube-api-access-c9z5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.617848 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0297c6e3-62f8-49cc-a073-8bb104949456-kube-api-access-k6z72" (OuterVolumeSpecName: "kube-api-access-k6z72") pod "0297c6e3-62f8-49cc-a073-8bb104949456" (UID: "0297c6e3-62f8-49cc-a073-8bb104949456"). InnerVolumeSpecName "kube-api-access-k6z72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.698942 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78955\" (UniqueName: \"kubernetes.io/projected/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-kube-api-access-78955\") pod \"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c\" (UID: \"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.699088 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxc48\" (UniqueName: \"kubernetes.io/projected/00437219-cb6b-48ad-a0cb-d75b82412ba1-kube-api-access-dxc48\") pod \"00437219-cb6b-48ad-a0cb-d75b82412ba1\" (UID: \"00437219-cb6b-48ad-a0cb-d75b82412ba1\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.699205 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b9f9eed-02b1-4541-8ebb-34826639233b-operator-scripts\") pod \"3b9f9eed-02b1-4541-8ebb-34826639233b\" (UID: \"3b9f9eed-02b1-4541-8ebb-34826639233b\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.699383 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-operator-scripts\") pod \"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c\" (UID: \"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.699451 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00437219-cb6b-48ad-a0cb-d75b82412ba1-operator-scripts\") pod \"00437219-cb6b-48ad-a0cb-d75b82412ba1\" (UID: \"00437219-cb6b-48ad-a0cb-d75b82412ba1\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.699532 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/056813ab-3913-42db-afa1-a79cb8e3a3c9-operator-scripts\") pod \"056813ab-3913-42db-afa1-a79cb8e3a3c9\" (UID: \"056813ab-3913-42db-afa1-a79cb8e3a3c9\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.699616 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9112f9c-911e-47d4-be64-e6f90fa6fa35-operator-scripts\") pod \"a9112f9c-911e-47d4-be64-e6f90fa6fa35\" (UID: \"a9112f9c-911e-47d4-be64-e6f90fa6fa35\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 
21:37:30.699725 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t78ct\" (UniqueName: \"kubernetes.io/projected/3b9f9eed-02b1-4541-8ebb-34826639233b-kube-api-access-t78ct\") pod \"3b9f9eed-02b1-4541-8ebb-34826639233b\" (UID: \"3b9f9eed-02b1-4541-8ebb-34826639233b\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.699789 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb22w\" (UniqueName: \"kubernetes.io/projected/056813ab-3913-42db-afa1-a79cb8e3a3c9-kube-api-access-gb22w\") pod \"056813ab-3913-42db-afa1-a79cb8e3a3c9\" (UID: \"056813ab-3913-42db-afa1-a79cb8e3a3c9\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.699881 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf1f702d-7084-4e85-add9-15c10223d801-operator-scripts\") pod \"bf1f702d-7084-4e85-add9-15c10223d801\" (UID: \"bf1f702d-7084-4e85-add9-15c10223d801\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.699968 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdhmt\" (UniqueName: \"kubernetes.io/projected/a9112f9c-911e-47d4-be64-e6f90fa6fa35-kube-api-access-sdhmt\") pod \"a9112f9c-911e-47d4-be64-e6f90fa6fa35\" (UID: \"a9112f9c-911e-47d4-be64-e6f90fa6fa35\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.700049 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdlzh\" (UniqueName: \"kubernetes.io/projected/bf1f702d-7084-4e85-add9-15c10223d801-kube-api-access-hdlzh\") pod \"bf1f702d-7084-4e85-add9-15c10223d801\" (UID: \"bf1f702d-7084-4e85-add9-15c10223d801\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.700511 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9z5w\" (UniqueName: \"kubernetes.io/projected/e63e6079-6772-46c3-9ec3-1e01741a210f-kube-api-access-c9z5w\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.700588 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6z72\" (UniqueName: \"kubernetes.io/projected/0297c6e3-62f8-49cc-a073-8bb104949456-kube-api-access-k6z72\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.700644 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0297c6e3-62f8-49cc-a073-8bb104949456-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.700695 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e63e6079-6772-46c3-9ec3-1e01741a210f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.701430 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b9f9eed-02b1-4541-8ebb-34826639233b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b9f9eed-02b1-4541-8ebb-34826639233b" (UID: "3b9f9eed-02b1-4541-8ebb-34826639233b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.701801 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c" (UID: "5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.701930 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00437219-cb6b-48ad-a0cb-d75b82412ba1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "00437219-cb6b-48ad-a0cb-d75b82412ba1" (UID: "00437219-cb6b-48ad-a0cb-d75b82412ba1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.702257 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9112f9c-911e-47d4-be64-e6f90fa6fa35-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9112f9c-911e-47d4-be64-e6f90fa6fa35" (UID: "a9112f9c-911e-47d4-be64-e6f90fa6fa35"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.703026 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056813ab-3913-42db-afa1-a79cb8e3a3c9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "056813ab-3913-42db-afa1-a79cb8e3a3c9" (UID: "056813ab-3913-42db-afa1-a79cb8e3a3c9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.703130 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf1f702d-7084-4e85-add9-15c10223d801-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf1f702d-7084-4e85-add9-15c10223d801" (UID: "bf1f702d-7084-4e85-add9-15c10223d801"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.704223 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00437219-cb6b-48ad-a0cb-d75b82412ba1-kube-api-access-dxc48" (OuterVolumeSpecName: "kube-api-access-dxc48") pod "00437219-cb6b-48ad-a0cb-d75b82412ba1" (UID: "00437219-cb6b-48ad-a0cb-d75b82412ba1"). InnerVolumeSpecName "kube-api-access-dxc48". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.705747 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-kube-api-access-78955" (OuterVolumeSpecName: "kube-api-access-78955") pod "5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c" (UID: "5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c"). InnerVolumeSpecName "kube-api-access-78955". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.706293 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9112f9c-911e-47d4-be64-e6f90fa6fa35-kube-api-access-sdhmt" (OuterVolumeSpecName: "kube-api-access-sdhmt") pod "a9112f9c-911e-47d4-be64-e6f90fa6fa35" (UID: "a9112f9c-911e-47d4-be64-e6f90fa6fa35"). InnerVolumeSpecName "kube-api-access-sdhmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.706995 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf1f702d-7084-4e85-add9-15c10223d801-kube-api-access-hdlzh" (OuterVolumeSpecName: "kube-api-access-hdlzh") pod "bf1f702d-7084-4e85-add9-15c10223d801" (UID: "bf1f702d-7084-4e85-add9-15c10223d801"). InnerVolumeSpecName "kube-api-access-hdlzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.709014 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/056813ab-3913-42db-afa1-a79cb8e3a3c9-kube-api-access-gb22w" (OuterVolumeSpecName: "kube-api-access-gb22w") pod "056813ab-3913-42db-afa1-a79cb8e3a3c9" (UID: "056813ab-3913-42db-afa1-a79cb8e3a3c9"). InnerVolumeSpecName "kube-api-access-gb22w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.717978 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b9f9eed-02b1-4541-8ebb-34826639233b-kube-api-access-t78ct" (OuterVolumeSpecName: "kube-api-access-t78ct") pod "3b9f9eed-02b1-4541-8ebb-34826639233b" (UID: "3b9f9eed-02b1-4541-8ebb-34826639233b"). InnerVolumeSpecName "kube-api-access-t78ct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.804354 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78955\" (UniqueName: \"kubernetes.io/projected/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-kube-api-access-78955\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805202 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxc48\" (UniqueName: \"kubernetes.io/projected/00437219-cb6b-48ad-a0cb-d75b82412ba1-kube-api-access-dxc48\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805238 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b9f9eed-02b1-4541-8ebb-34826639233b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805258 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805276 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00437219-cb6b-48ad-a0cb-d75b82412ba1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805292 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/056813ab-3913-42db-afa1-a79cb8e3a3c9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805309 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9112f9c-911e-47d4-be64-e6f90fa6fa35-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805350 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t78ct\" (UniqueName: \"kubernetes.io/projected/3b9f9eed-02b1-4541-8ebb-34826639233b-kube-api-access-t78ct\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805367 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb22w\" (UniqueName: \"kubernetes.io/projected/056813ab-3913-42db-afa1-a79cb8e3a3c9-kube-api-access-gb22w\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805384 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf1f702d-7084-4e85-add9-15c10223d801-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805421 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdhmt\" (UniqueName: \"kubernetes.io/projected/a9112f9c-911e-47d4-be64-e6f90fa6fa35-kube-api-access-sdhmt\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805439 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdlzh\" (UniqueName: \"kubernetes.io/projected/bf1f702d-7084-4e85-add9-15c10223d801-kube-api-access-hdlzh\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.008074 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lqv47" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.008086 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lqv47" event={"ID":"0297c6e3-62f8-49cc-a073-8bb104949456","Type":"ContainerDied","Data":"e3f3f61970d72f82ffba4d9a80464a10e8b9c51e7583102951e1de7d389e2988"} Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.008169 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3f3f61970d72f82ffba4d9a80464a10e8b9c51e7583102951e1de7d389e2988" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.012176 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2gxmh" event={"ID":"bf1f702d-7084-4e85-add9-15c10223d801","Type":"ContainerDied","Data":"f37db6345203c46035cf2c18f2b4711cfad519df170163afdd9effa521c52d7f"} Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.012220 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f37db6345203c46035cf2c18f2b4711cfad519df170163afdd9effa521c52d7f" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.012287 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2gxmh" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.015183 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f7f7-account-create-update-d88cz" event={"ID":"a9112f9c-911e-47d4-be64-e6f90fa6fa35","Type":"ContainerDied","Data":"eb7be4270b42ffce4b76f159858fa9a7aa3755769d6109ee488fcb1be59f44c4"} Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.015232 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb7be4270b42ffce4b76f159858fa9a7aa3755769d6109ee488fcb1be59f44c4" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.015316 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f7f7-account-create-update-d88cz" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.017904 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z99cv" event={"ID":"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1","Type":"ContainerStarted","Data":"dde509ef6f207cc2bcc76a35805e737a06489616a2c06460edf270c4d46949ff"} Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.023167 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0f07-account-create-update-fr6kw" event={"ID":"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c","Type":"ContainerDied","Data":"a4945e2a89acdb4e8102ab0d694b98c9850bbfc9f1b5e7a4eafc6349cfa65fb3"} Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.023233 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4945e2a89acdb4e8102ab0d694b98c9850bbfc9f1b5e7a4eafc6349cfa65fb3" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.023208 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0f07-account-create-update-fr6kw" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.040538 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-2618-account-create-update-fdl95" event={"ID":"e63e6079-6772-46c3-9ec3-1e01741a210f","Type":"ContainerDied","Data":"15f677f718a80e7a19e65a1e37fb95a181277e9a1d14c765d1593019e2e9f3c0"} Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.040580 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15f677f718a80e7a19e65a1e37fb95a181277e9a1d14c765d1593019e2e9f3c0" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.040682 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-2618-account-create-update-fdl95" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.043435 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zhgsw" event={"ID":"00437219-cb6b-48ad-a0cb-d75b82412ba1","Type":"ContainerDied","Data":"30cf4d0dacd7fd4fd22cf05d9b571d841adae3d71171a81d492efaee7e75a8c2"} Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.043523 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30cf4d0dacd7fd4fd22cf05d9b571d841adae3d71171a81d492efaee7e75a8c2" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.043671 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zhgsw" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.049642 4751 generic.go:334] "Generic (PLEG): container finished" podID="3e7af95c-7ba2-4e0b-9947-795d9629744c" containerID="177d01f9395c57a0704f8e3be47f47ddcda9844296cb5595f9c79bfbfade602b" exitCode=0 Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.049765 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e7af95c-7ba2-4e0b-9947-795d9629744c","Type":"ContainerDied","Data":"177d01f9395c57a0704f8e3be47f47ddcda9844296cb5595f9c79bfbfade602b"} Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.051279 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-z99cv" podStartSLOduration=2.508373302 podStartE2EDuration="8.051260384s" podCreationTimestamp="2026-01-30 21:37:23 +0000 UTC" firstStartedPulling="2026-01-30 21:37:24.724563784 +0000 UTC m=+1383.470386433" lastFinishedPulling="2026-01-30 21:37:30.267450866 +0000 UTC m=+1389.013273515" observedRunningTime="2026-01-30 21:37:31.046478626 +0000 UTC m=+1389.792301275" watchObservedRunningTime="2026-01-30 21:37:31.051260384 +0000 UTC m=+1389.797083033" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.053090 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-31bb-account-create-update-w6h5f" event={"ID":"3b9f9eed-02b1-4541-8ebb-34826639233b","Type":"ContainerDied","Data":"d0f85c50ce506d7335344aa42ebdeec0dc51661e674c8577501465433e6f645a"} Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.054035 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0f85c50ce506d7335344aa42ebdeec0dc51661e674c8577501465433e6f645a" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.053157 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-31bb-account-create-update-w6h5f" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.055829 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-hr9lv" event={"ID":"056813ab-3913-42db-afa1-a79cb8e3a3c9","Type":"ContainerDied","Data":"e1328141a3657eae04671c0e1b8d5daf9d2fd7acd381b279a7654fac0a691f97"} Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.055868 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1328141a3657eae04671c0e1b8d5daf9d2fd7acd381b279a7654fac0a691f97" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.055962 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-hr9lv" Jan 30 21:37:32 crc kubenswrapper[4751]: I0130 21:37:32.067443 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e7af95c-7ba2-4e0b-9947-795d9629744c","Type":"ContainerStarted","Data":"b7d4458f747dea98872f2e05c84ba42c153f59bc233d9d19cd52f10f61db8075"} Jan 30 21:37:32 crc kubenswrapper[4751]: I0130 21:37:32.602595 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:32 crc kubenswrapper[4751]: I0130 21:37:32.717969 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4dbml"] Jan 30 21:37:32 crc kubenswrapper[4751]: I0130 21:37:32.718253 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" podUID="eb683b6d-9110-46e1-8406-eea86d9cc73b" containerName="dnsmasq-dns" containerID="cri-o://d13bdba61d4e84c62b4410d765f4f99e77b7c81d9c8b2fd1ad7ff51b9c7b511e" gracePeriod=10 Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.077907 4751 generic.go:334] "Generic (PLEG): container finished" podID="eb683b6d-9110-46e1-8406-eea86d9cc73b" containerID="d13bdba61d4e84c62b4410d765f4f99e77b7c81d9c8b2fd1ad7ff51b9c7b511e" exitCode=0 Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.078233 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" event={"ID":"eb683b6d-9110-46e1-8406-eea86d9cc73b","Type":"ContainerDied","Data":"d13bdba61d4e84c62b4410d765f4f99e77b7c81d9c8b2fd1ad7ff51b9c7b511e"} Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.265531 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.367143 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-sb\") pod \"eb683b6d-9110-46e1-8406-eea86d9cc73b\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.367293 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-config\") pod \"eb683b6d-9110-46e1-8406-eea86d9cc73b\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.367379 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58ct9\" (UniqueName: \"kubernetes.io/projected/eb683b6d-9110-46e1-8406-eea86d9cc73b-kube-api-access-58ct9\") pod \"eb683b6d-9110-46e1-8406-eea86d9cc73b\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.367477 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-dns-svc\") pod \"eb683b6d-9110-46e1-8406-eea86d9cc73b\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.367509 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-nb\") pod \"eb683b6d-9110-46e1-8406-eea86d9cc73b\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.380418 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb683b6d-9110-46e1-8406-eea86d9cc73b-kube-api-access-58ct9" (OuterVolumeSpecName: "kube-api-access-58ct9") pod "eb683b6d-9110-46e1-8406-eea86d9cc73b" (UID: "eb683b6d-9110-46e1-8406-eea86d9cc73b"). InnerVolumeSpecName "kube-api-access-58ct9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.421362 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-config" (OuterVolumeSpecName: "config") pod "eb683b6d-9110-46e1-8406-eea86d9cc73b" (UID: "eb683b6d-9110-46e1-8406-eea86d9cc73b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.431319 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb683b6d-9110-46e1-8406-eea86d9cc73b" (UID: "eb683b6d-9110-46e1-8406-eea86d9cc73b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.435835 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eb683b6d-9110-46e1-8406-eea86d9cc73b" (UID: "eb683b6d-9110-46e1-8406-eea86d9cc73b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.443251 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb683b6d-9110-46e1-8406-eea86d9cc73b" (UID: "eb683b6d-9110-46e1-8406-eea86d9cc73b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.469533 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.469566 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58ct9\" (UniqueName: \"kubernetes.io/projected/eb683b6d-9110-46e1-8406-eea86d9cc73b-kube-api-access-58ct9\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.469578 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.469588 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.469596 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:34 crc kubenswrapper[4751]: I0130 21:37:34.114187 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" event={"ID":"eb683b6d-9110-46e1-8406-eea86d9cc73b","Type":"ContainerDied","Data":"91fac1793a7a2b8a269edafca995d78c1aceb7914291bdd22c295ca0ed226b45"} Jan 30 21:37:34 crc kubenswrapper[4751]: I0130 21:37:34.114236 4751 scope.go:117] "RemoveContainer" containerID="d13bdba61d4e84c62b4410d765f4f99e77b7c81d9c8b2fd1ad7ff51b9c7b511e" Jan 30 21:37:34 crc kubenswrapper[4751]: I0130 21:37:34.114298 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:37:34 crc kubenswrapper[4751]: I0130 21:37:34.145974 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4dbml"] Jan 30 21:37:34 crc kubenswrapper[4751]: I0130 21:37:34.157606 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4dbml"] Jan 30 21:37:34 crc kubenswrapper[4751]: I0130 21:37:34.159351 4751 scope.go:117] "RemoveContainer" containerID="ed1388c6eb28c157030933478df87642f4fba3d9c198c284f1958d42816f2e6a" Jan 30 21:37:35 crc kubenswrapper[4751]: I0130 21:37:35.126678 4751 generic.go:334] "Generic (PLEG): container finished" podID="0bc5d80d-ae17-431d-8e0f-6003af0fa6b1" containerID="dde509ef6f207cc2bcc76a35805e737a06489616a2c06460edf270c4d46949ff" exitCode=0 Jan 30 21:37:35 crc kubenswrapper[4751]: I0130 21:37:35.126771 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z99cv" event={"ID":"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1","Type":"ContainerDied","Data":"dde509ef6f207cc2bcc76a35805e737a06489616a2c06460edf270c4d46949ff"} Jan 30 21:37:35 crc kubenswrapper[4751]: I0130 21:37:35.131788 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e7af95c-7ba2-4e0b-9947-795d9629744c","Type":"ContainerStarted","Data":"60cd133862e242523da1850cf2668498e42b1be6ab5b9c3500e00cd6db2d11b9"} Jan 30 21:37:35 crc kubenswrapper[4751]: I0130 21:37:35.131810 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e7af95c-7ba2-4e0b-9947-795d9629744c","Type":"ContainerStarted","Data":"bcfa0cad1d668d9d2687e92590ddc5f1eeb541c940e42b19186ed9cd5552b432"} Jan 30 21:37:35 crc kubenswrapper[4751]: I0130 21:37:35.196796 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=19.196769484 podStartE2EDuration="19.196769484s" podCreationTimestamp="2026-01-30 21:37:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:37:35.185511523 +0000 UTC m=+1393.931334172" watchObservedRunningTime="2026-01-30 21:37:35.196769484 +0000 UTC m=+1393.942592173" Jan 30 21:37:35 crc kubenswrapper[4751]: I0130 21:37:35.999124 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb683b6d-9110-46e1-8406-eea86d9cc73b" path="/var/lib/kubelet/pods/eb683b6d-9110-46e1-8406-eea86d9cc73b/volumes" Jan 30 21:37:36 crc kubenswrapper[4751]: I0130 21:37:36.590734 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:36 crc kubenswrapper[4751]: I0130 21:37:36.735027 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-combined-ca-bundle\") pod \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " Jan 30 21:37:36 crc kubenswrapper[4751]: I0130 21:37:36.735531 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rr69\" (UniqueName: \"kubernetes.io/projected/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-kube-api-access-2rr69\") pod \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " Jan 30 21:37:36 crc kubenswrapper[4751]: I0130 21:37:36.735785 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-config-data\") pod \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " Jan 30 21:37:36 crc kubenswrapper[4751]: I0130 21:37:36.744667 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-kube-api-access-2rr69" (OuterVolumeSpecName: "kube-api-access-2rr69") pod "0bc5d80d-ae17-431d-8e0f-6003af0fa6b1" (UID: "0bc5d80d-ae17-431d-8e0f-6003af0fa6b1"). InnerVolumeSpecName "kube-api-access-2rr69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:36 crc kubenswrapper[4751]: I0130 21:37:36.770827 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bc5d80d-ae17-431d-8e0f-6003af0fa6b1" (UID: "0bc5d80d-ae17-431d-8e0f-6003af0fa6b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:36 crc kubenswrapper[4751]: I0130 21:37:36.802229 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-config-data" (OuterVolumeSpecName: "config-data") pod "0bc5d80d-ae17-431d-8e0f-6003af0fa6b1" (UID: "0bc5d80d-ae17-431d-8e0f-6003af0fa6b1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:36 crc kubenswrapper[4751]: I0130 21:37:36.838239 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:36 crc kubenswrapper[4751]: I0130 21:37:36.838471 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rr69\" (UniqueName: \"kubernetes.io/projected/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-kube-api-access-2rr69\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:36 crc kubenswrapper[4751]: I0130 21:37:36.838548 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.036826 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.155457 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z99cv" event={"ID":"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1","Type":"ContainerDied","Data":"217787355da3f9922a4df5837ffe3d8b89b3671ee38fbdd59e2d1b3a45833a3c"} Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.155513 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="217787355da3f9922a4df5837ffe3d8b89b3671ee38fbdd59e2d1b3a45833a3c" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.155565 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474115 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-tpqxs"] Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474763 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1f702d-7084-4e85-add9-15c10223d801" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474781 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1f702d-7084-4e85-add9-15c10223d801" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474796 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7876a87-ce9e-4d67-a296-cfe228be3d3e" containerName="init" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474803 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7876a87-ce9e-4d67-a296-cfe228be3d3e" containerName="init" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474811 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bc5d80d-ae17-431d-8e0f-6003af0fa6b1" containerName="keystone-db-sync" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474817 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc5d80d-ae17-431d-8e0f-6003af0fa6b1" containerName="keystone-db-sync" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474838 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056813ab-3913-42db-afa1-a79cb8e3a3c9" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474843 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="056813ab-3913-42db-afa1-a79cb8e3a3c9" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 
21:37:37.474858 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00437219-cb6b-48ad-a0cb-d75b82412ba1" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474864 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="00437219-cb6b-48ad-a0cb-d75b82412ba1" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474873 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9f9eed-02b1-4541-8ebb-34826639233b" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474879 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9f9eed-02b1-4541-8ebb-34826639233b" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474892 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0297c6e3-62f8-49cc-a073-8bb104949456" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474898 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0297c6e3-62f8-49cc-a073-8bb104949456" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474907 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474913 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474926 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb683b6d-9110-46e1-8406-eea86d9cc73b" containerName="dnsmasq-dns" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474931 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb683b6d-9110-46e1-8406-eea86d9cc73b" containerName="dnsmasq-dns" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474938 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63e6079-6772-46c3-9ec3-1e01741a210f" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474944 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63e6079-6772-46c3-9ec3-1e01741a210f" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474950 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9112f9c-911e-47d4-be64-e6f90fa6fa35" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474956 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9112f9c-911e-47d4-be64-e6f90fa6fa35" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474971 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb683b6d-9110-46e1-8406-eea86d9cc73b" containerName="init" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474976 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb683b6d-9110-46e1-8406-eea86d9cc73b" containerName="init" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475144 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e63e6079-6772-46c3-9ec3-1e01741a210f" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475157 4751 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a7876a87-ce9e-4d67-a296-cfe228be3d3e" containerName="init" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475167 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb683b6d-9110-46e1-8406-eea86d9cc73b" containerName="dnsmasq-dns" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475177 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9f9eed-02b1-4541-8ebb-34826639233b" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475186 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="00437219-cb6b-48ad-a0cb-d75b82412ba1" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475196 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf1f702d-7084-4e85-add9-15c10223d801" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475208 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0297c6e3-62f8-49cc-a073-8bb104949456" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475219 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475228 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bc5d80d-ae17-431d-8e0f-6003af0fa6b1" containerName="keystone-db-sync" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475240 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9112f9c-911e-47d4-be64-e6f90fa6fa35" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475249 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="056813ab-3913-42db-afa1-a79cb8e3a3c9" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475926 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.480232 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6zjrt" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.480650 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.480868 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.481054 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.481293 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.487240 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-64gs8"] Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.501973 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.511774 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tpqxs"] Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.517154 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-64gs8"] Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.553076 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-scripts\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.553381 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-credential-keys\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.553537 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbsxf\" (UniqueName: \"kubernetes.io/projected/a83a8c4a-677e-4481-b671-f5fa6edadb5f-kube-api-access-sbsxf\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.553618 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-fernet-keys\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.553751 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-combined-ca-bundle\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.553844 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-config-data\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.641917 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-npwgd"] Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.645493 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-npwgd" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.650236 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-q572p" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.650532 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.656902 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-credential-keys\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.657064 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.657152 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.657997 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbsxf\" (UniqueName: \"kubernetes.io/projected/a83a8c4a-677e-4481-b671-f5fa6edadb5f-kube-api-access-sbsxf\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.658101 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.658168 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-fernet-keys\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.658290 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-combined-ca-bundle\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.658397 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-config-data\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: 
I0130 21:37:37.658463 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-config\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.658618 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-scripts\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.658701 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtdkx\" (UniqueName: \"kubernetes.io/projected/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-kube-api-access-rtdkx\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.658787 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.665932 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-credential-keys\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.669239 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-combined-ca-bundle\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.673538 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-fernet-keys\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.675440 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-npwgd"] Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.695684 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-scripts\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.697986 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-config-data\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc 
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.766262 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.766345 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxctb\" (UniqueName: \"kubernetes.io/projected/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-kube-api-access-vxctb\") pod \"heat-db-sync-npwgd\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " pod="openstack/heat-db-sync-npwgd"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.766380 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.766406 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-combined-ca-bundle\") pod \"heat-db-sync-npwgd\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " pod="openstack/heat-db-sync-npwgd"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.766431 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.766459 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-config-data\") pod \"heat-db-sync-npwgd\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " pod="openstack/heat-db-sync-npwgd"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.766476 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.766547 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-config\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.766604 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtdkx\" (UniqueName: \"kubernetes.io/projected/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-kube-api-access-rtdkx\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.767642 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.768289 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.769026 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.769723 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.770357 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-config\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.793218 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-lwm4t"]
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.795042 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lwm4t"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.805055 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.805334 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-6hfgl"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.805485 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.812808 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tpqxs"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.836456 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtdkx\" (UniqueName: \"kubernetes.io/projected/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-kube-api-access-rtdkx\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.837975 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-64gs8"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.873391 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxctb\" (UniqueName: \"kubernetes.io/projected/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-kube-api-access-vxctb\") pod \"heat-db-sync-npwgd\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " pod="openstack/heat-db-sync-npwgd"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.873447 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-combined-ca-bundle\") pod \"heat-db-sync-npwgd\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " pod="openstack/heat-db-sync-npwgd"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.873485 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-config-data\") pod \"heat-db-sync-npwgd\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " pod="openstack/heat-db-sync-npwgd"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.873559 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-config\") pod \"neutron-db-sync-lwm4t\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " pod="openstack/neutron-db-sync-lwm4t"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.873586 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-combined-ca-bundle\") pod \"neutron-db-sync-lwm4t\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " pod="openstack/neutron-db-sync-lwm4t"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.873632 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghgm7\" (UniqueName: \"kubernetes.io/projected/d42b4031-ca3e-4b28-b62a-eb346132dc3a-kube-api-access-ghgm7\") pod \"neutron-db-sync-lwm4t\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " pod="openstack/neutron-db-sync-lwm4t"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.879635 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lwm4t"]
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.896132 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-combined-ca-bundle\") pod \"heat-db-sync-npwgd\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " pod="openstack/heat-db-sync-npwgd"
Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.897157 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-config-data\") pod \"heat-db-sync-npwgd\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " pod="openstack/heat-db-sync-npwgd"
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-config-data\") pod \"heat-db-sync-npwgd\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " pod="openstack/heat-db-sync-npwgd" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.899834 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-bq6lp"] Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.901216 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.908735 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.935573 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.935792 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-s756f" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.959844 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxctb\" (UniqueName: \"kubernetes.io/projected/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-kube-api-access-vxctb\") pod \"heat-db-sync-npwgd\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " pod="openstack/heat-db-sync-npwgd" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.974973 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-db-sync-config-data\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.975066 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-config-data\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.975098 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-scripts\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.975124 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-config\") pod \"neutron-db-sync-lwm4t\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.975150 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-combined-ca-bundle\") pod \"neutron-db-sync-lwm4t\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.975180 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4xqxr\" (UniqueName: \"kubernetes.io/projected/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-kube-api-access-4xqxr\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.975221 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-combined-ca-bundle\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.975237 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-etc-machine-id\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.975258 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghgm7\" (UniqueName: \"kubernetes.io/projected/d42b4031-ca3e-4b28-b62a-eb346132dc3a-kube-api-access-ghgm7\") pod \"neutron-db-sync-lwm4t\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.985006 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-combined-ca-bundle\") pod \"neutron-db-sync-lwm4t\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.993401 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-config\") pod \"neutron-db-sync-lwm4t\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.079777 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghgm7\" (UniqueName: \"kubernetes.io/projected/d42b4031-ca3e-4b28-b62a-eb346132dc3a-kube-api-access-ghgm7\") pod \"neutron-db-sync-lwm4t\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.081018 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-scripts\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.081091 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xqxr\" (UniqueName: \"kubernetes.io/projected/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-kube-api-access-4xqxr\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.081132 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-combined-ca-bundle\") pod 
\"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.081149 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-etc-machine-id\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.081261 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-db-sync-config-data\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.081288 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-config-data\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.119563 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-bq6lp"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.119609 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-v9spg"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.122918 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.126949 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-combined-ca-bundle\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.085096 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-etc-machine-id\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.128021 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-npwgd" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.142852 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-config-data\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.152670 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8smxc" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.153810 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.155569 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.163858 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-scripts\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.210049 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-v9spg"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.214519 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-db-sync-config-data\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.221166 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xqxr\" (UniqueName: \"kubernetes.io/projected/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-kube-api-access-4xqxr\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.233046 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-config-data\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.233176 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb4qn\" (UniqueName: \"kubernetes.io/projected/8555e0d7-6d06-4edb-b463-86f7bf829949-kube-api-access-wb4qn\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.233231 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8555e0d7-6d06-4edb-b463-86f7bf829949-logs\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.233252 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-combined-ca-bundle\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.233454 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-scripts\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.297260 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-64gs8"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.297708 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.310315 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-rt7v2"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.311555 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.320983 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rrrpx" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.338014 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.339025 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-scripts\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.339109 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-config-data\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.341340 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb4qn\" (UniqueName: \"kubernetes.io/projected/8555e0d7-6d06-4edb-b463-86f7bf829949-kube-api-access-wb4qn\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.341393 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8555e0d7-6d06-4edb-b463-86f7bf829949-logs\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.341408 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-combined-ca-bundle\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.342734 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8555e0d7-6d06-4edb-b463-86f7bf829949-logs\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.351796 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-combined-ca-bundle\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.353770 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-config-data\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.354030 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-scripts\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.364658 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.372045 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rt7v2"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.381058 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb4qn\" (UniqueName: \"kubernetes.io/projected/8555e0d7-6d06-4edb-b463-86f7bf829949-kube-api-access-wb4qn\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.389102 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5x59j"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.390846 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.423141 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5x59j"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.436070 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.443991 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tbq8\" (UniqueName: \"kubernetes.io/projected/a90f6a78-a996-49f8-a567-d2699c737d1f-kube-api-access-6tbq8\") pod \"barbican-db-sync-rt7v2\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.444165 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-combined-ca-bundle\") pod \"barbican-db-sync-rt7v2\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.444199 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-db-sync-config-data\") pod \"barbican-db-sync-rt7v2\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.445025 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.445389 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.449865 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.465314 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.468751 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.553722 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-combined-ca-bundle\") pod \"barbican-db-sync-rt7v2\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554010 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554035 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-db-sync-config-data\") pod \"barbican-db-sync-rt7v2\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554052 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-config\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554081 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-log-httpd\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554102 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554133 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-run-httpd\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554169 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tbq8\" (UniqueName: \"kubernetes.io/projected/a90f6a78-a996-49f8-a567-d2699c737d1f-kube-api-access-6tbq8\") pod \"barbican-db-sync-rt7v2\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554194 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x98w\" (UniqueName: \"kubernetes.io/projected/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-kube-api-access-6x98w\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 
crc kubenswrapper[4751]: I0130 21:37:38.554216 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-config-data\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554249 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-scripts\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554270 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554312 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554364 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554386 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554418 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc98s\" (UniqueName: \"kubernetes.io/projected/714bda18-396a-4c61-b32c-28c97f9212c7-kube-api-access-pc98s\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.563974 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-combined-ca-bundle\") pod \"barbican-db-sync-rt7v2\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.569481 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-db-sync-config-data\") pod \"barbican-db-sync-rt7v2\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.584234 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tbq8\" (UniqueName: \"kubernetes.io/projected/a90f6a78-a996-49f8-a567-d2699c737d1f-kube-api-access-6tbq8\") pod \"barbican-db-sync-rt7v2\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.636665 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-64gs8"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.647526 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670245 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670356 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670385 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670433 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc98s\" (UniqueName: \"kubernetes.io/projected/714bda18-396a-4c61-b32c-28c97f9212c7-kube-api-access-pc98s\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670481 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670511 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-config\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670545 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-log-httpd\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670573 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-sb\") pod 
\"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670606 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-run-httpd\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670669 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x98w\" (UniqueName: \"kubernetes.io/projected/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-kube-api-access-6x98w\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670698 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-config-data\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670731 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-scripts\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670760 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.673585 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.674216 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.676849 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-log-httpd\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.684415 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.689062 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-config\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.689592 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-run-httpd\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.689633 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.689651 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.690175 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.696439 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.698753 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.703201 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.703442 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.704944 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.715866 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x98w\" (UniqueName: \"kubernetes.io/projected/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-kube-api-access-6x98w\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.725602 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qdcvb" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.727450 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-config-data\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.728185 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc98s\" (UniqueName: \"kubernetes.io/projected/714bda18-396a-4c61-b32c-28c97f9212c7-kube-api-access-pc98s\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.737233 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.768633 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.770393 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.792255 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.792527 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.795716 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-scripts\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.796001 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878391 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878450 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878476 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-logs\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878492 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878526 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878562 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7sv4\" (UniqueName: \"kubernetes.io/projected/4ae4a36d-9cf4-40cd-aa7a-6da368242040-kube-api-access-v7sv4\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878613 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878642 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878707 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878754 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878769 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79rgr\" (UniqueName: \"kubernetes.io/projected/216704d4-5c21-497f-95b7-1e882daec251-kube-api-access-79rgr\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878793 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878808 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-logs\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878861 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878885 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-scripts\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 
30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878901 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-config-data\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.880733 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tpqxs"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982316 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-logs\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982370 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982409 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982443 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7sv4\" (UniqueName: \"kubernetes.io/projected/4ae4a36d-9cf4-40cd-aa7a-6da368242040-kube-api-access-v7sv4\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982484 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982512 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982562 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982598 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982613 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79rgr\" (UniqueName: \"kubernetes.io/projected/216704d4-5c21-497f-95b7-1e882daec251-kube-api-access-79rgr\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982631 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982645 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-logs\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982688 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982708 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-scripts\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982722 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-config-data\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982758 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982780 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982832 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-logs\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.983508 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.987756 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.988362 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-logs\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.995197 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.995234 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1439638cb8026f3fbd74a1d30ab35170ee3b35899e999b31e76311ef8605b4f/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.995922 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
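The mount flow recorded up to this point follows the kubelet's fixed sequence: operationExecutor.VerifyControllerAttachedVolume, then operationExecutor.MountVolume started, then MountVolume.SetUp succeeded; for these PVCs, MountVolume.MountDevice is effectively skipped because the kubevirt.io.hostpath-provisioner CSI driver does not advertise the STAGE_UNSTAGE_VOLUME capability, as the csi_attacher entries above state. The paired started/succeeded entries can be mined for per-volume mount latency. A minimal sketch, assuming the journal has been exported to a plain-text file with one entry per line (kubelet.log is an illustrative name, and the year is pinned only because klog timestamps omit it):

```python
#!/usr/bin/env python3
"""Pair 'MountVolume started' with 'MountVolume.SetUp succeeded'
kubelet entries and report per-(pod, volume) mount latency."""
import re
import sys
from datetime import datetime

# klog prefix, e.g. "I0130 21:37:38.670356": level letter, month, day, wall clock.
TS = re.compile(r'[IWE](\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)')
# Volume names appear as \"name\" inside the quoted klog message.
STARTED = re.compile(r'MountVolume started for volume \\?"([^"\\]+)\\?".*?pod="([^"]+)"')
DONE = re.compile(r'MountVolume\.SetUp succeeded for volume \\?"([^"\\]+)\\?".*?pod="([^"]+)"')

def stamp(line):
    m = TS.search(line)
    if m is None:
        return None
    month, day, clock = m.groups()
    # klog omits the year; pin one so the time arithmetic works (assumption).
    return datetime.strptime(f"2026-{month}-{day} {clock}", "%Y-%m-%d %H:%M:%S.%f")

started = {}
with open(sys.argv[1] if len(sys.argv) > 1 else "kubelet.log") as fh:
    for line in fh:
        t = stamp(line)
        if t is None:
            continue
        if (m := STARTED.search(line)):
            started[(m.group(2), m.group(1))] = t
        elif (m := DONE.search(line)):
            key = (m.group(2), m.group(1))
            if key in started:
                dt = (t - started.pop(key)).total_seconds() * 1000
                print(f"{key[0]:45s} {key[1]:25s} {dt:8.1f} ms")
```

Run against this capture it would report, for example, that kube-api-access-pc98s for openstack/dnsmasq-dns-56df8fb6b7-5x59j went from MountVolume started (21:37:38.670433) to SetUp succeeded (21:37:38.728185) in roughly 58 ms.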
Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.996027 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bd5683b2fac8da06378b2d5eb72c7d0b6faa54e75d4b318b8013499a38483353/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.001991 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.002767 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.011255 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7sv4\" (UniqueName: \"kubernetes.io/projected/4ae4a36d-9cf4-40cd-aa7a-6da368242040-kube-api-access-v7sv4\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.028529 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.028691 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.028773 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.029102 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-config-data\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.033454 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.036215 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.037036 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-scripts\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.043307 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79rgr\" (UniqueName: \"kubernetes.io/projected/216704d4-5c21-497f-95b7-1e882daec251-kube-api-access-79rgr\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.076073 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.098893 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.102610 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-npwgd"] Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.111019 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.145008 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.211297 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.260107 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-npwgd" event={"ID":"1051dd3c-5d30-47f1-8162-3a3e9d5ee271","Type":"ContainerStarted","Data":"61ca5d5ef115983d440cf3f223c2366b80c380ae13e04f479bd30ca5a18ae1d4"} Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.266765 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" event={"ID":"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a","Type":"ContainerStarted","Data":"f03447e044ec076808966e68051e046395f76c21c7d74f8ecac4fe5c4986784d"} Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.275550 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tpqxs" event={"ID":"a83a8c4a-677e-4481-b671-f5fa6edadb5f","Type":"ContainerStarted","Data":"735b101ef3e18e05703b0e39bfc247d4eafc863fa5ee869699895513af302ba8"} Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.477454 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lwm4t"] Jan 30 21:37:39 crc kubenswrapper[4751]: W0130 21:37:39.506845 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd42b4031_ca3e_4b28_b62a_eb346132dc3a.slice/crio-8fffc2cd09b32d538d073fe7efb076cbe727bad01ceac9607f3b182ea08e3707 WatchSource:0}: Error finding container 8fffc2cd09b32d538d073fe7efb076cbe727bad01ceac9607f3b182ea08e3707: Status 404 returned error can't find the container with id 8fffc2cd09b32d538d073fe7efb076cbe727bad01ceac9607f3b182ea08e3707 Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.553274 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-bq6lp"] Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.564268 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-v9spg"] Jan 30 21:37:39 crc kubenswrapper[4751]: W0130 21:37:39.658420 4751 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda90f6a78_a996_49f8_a567_d2699c737d1f.slice/crio-0da3c4e319786c371f66bda236269e4c334ffccdd92bab472a1f2cb2958a901e WatchSource:0}: Error finding container 0da3c4e319786c371f66bda236269e4c334ffccdd92bab472a1f2cb2958a901e: Status 404 returned error can't find the container with id 0da3c4e319786c371f66bda236269e4c334ffccdd92bab472a1f2cb2958a901e Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.658899 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rt7v2"] Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.778631 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5x59j"] Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.952502 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.084517 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.176420 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.278394 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.296529 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" event={"ID":"714bda18-396a-4c61-b32c-28c97f9212c7","Type":"ContainerStarted","Data":"ef63bfb279b0f69282ea04ec8633731532edcb149df478089ae1d0918490a1d0"} Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.296584 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" event={"ID":"714bda18-396a-4c61-b32c-28c97f9212c7","Type":"ContainerStarted","Data":"d95ab28a617fe672cc0a279ffe8dbf4ffcdc0187184b974eff4f150e1720495f"} Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.306302 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lwm4t" event={"ID":"d42b4031-ca3e-4b28-b62a-eb346132dc3a","Type":"ContainerStarted","Data":"3f232e6698625cc60ad1770425a3662d4b2453997f82a2581cabc9a30c379df0"} Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.306357 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lwm4t" event={"ID":"d42b4031-ca3e-4b28-b62a-eb346132dc3a","Type":"ContainerStarted","Data":"8fffc2cd09b32d538d073fe7efb076cbe727bad01ceac9607f3b182ea08e3707"} Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.318840 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v9spg" event={"ID":"8555e0d7-6d06-4edb-b463-86f7bf829949","Type":"ContainerStarted","Data":"7502198a116b3f2771f1ab3c57c8008044d28b3101423ffa202d372d5ac52b80"} Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.333070 4751 generic.go:334] "Generic (PLEG): container finished" podID="cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" containerID="45febbc6f670463da13688cdf32eacffd29dea9d71d9b8485fc96c2e17071d4a" exitCode=0 Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.333129 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" event={"ID":"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a","Type":"ContainerDied","Data":"45febbc6f670463da13688cdf32eacffd29dea9d71d9b8485fc96c2e17071d4a"} Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.334702 4751 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bq6lp" event={"ID":"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24","Type":"ContainerStarted","Data":"d48985825eedc61af18140d62898c9c9236f51e33569add314f3a9440bbd00d5"} Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.339544 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rt7v2" event={"ID":"a90f6a78-a996-49f8-a567-d2699c737d1f","Type":"ContainerStarted","Data":"0da3c4e319786c371f66bda236269e4c334ffccdd92bab472a1f2cb2958a901e"} Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.399186 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tpqxs" event={"ID":"a83a8c4a-677e-4481-b671-f5fa6edadb5f","Type":"ContainerStarted","Data":"7254994e57fe71a7702af83ba12ef3a837f896f1d6e6e6a7dbba9ca54cdfc1ad"} Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.414378 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36866d1c-b1a0-4d3e-a87f-f5901b053bb5","Type":"ContainerStarted","Data":"f50cd608a1a18449e64efb07caa3e3fd54b436a099efe0495f393f4382e9ab10"} Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.433178 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-lwm4t" podStartSLOduration=3.433160725 podStartE2EDuration="3.433160725s" podCreationTimestamp="2026-01-30 21:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:37:40.396880407 +0000 UTC m=+1399.142703056" watchObservedRunningTime="2026-01-30 21:37:40.433160725 +0000 UTC m=+1399.178983374" Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.489588 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-tpqxs" podStartSLOduration=3.48956547 podStartE2EDuration="3.48956547s" podCreationTimestamp="2026-01-30 21:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:37:40.475877665 +0000 UTC m=+1399.221700314" watchObservedRunningTime="2026-01-30 21:37:40.48956547 +0000 UTC m=+1399.235388119" Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.551024 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.748455 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 21:37:40 crc kubenswrapper[4751]: W0130 21:37:40.753987 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ae4a36d_9cf4_40cd_aa7a_6da368242040.slice/crio-753c93823a40b1ea10f438daff89be405292baad01f508a3efec7b992c676a97 WatchSource:0}: Error finding container 753c93823a40b1ea10f438daff89be405292baad01f508a3efec7b992c676a97: Status 404 returned error can't find the container with id 753c93823a40b1ea10f438daff89be405292baad01f508a3efec7b992c676a97 Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.339492 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.438281 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-swift-storage-0\") pod \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.438481 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-config\") pod \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.438518 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-nb\") pod \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.438556 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-sb\") pod \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.438690 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtdkx\" (UniqueName: \"kubernetes.io/projected/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-kube-api-access-rtdkx\") pod \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.438768 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-svc\") pod \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.457686 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-kube-api-access-rtdkx" (OuterVolumeSpecName: "kube-api-access-rtdkx") pod "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" (UID: "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a"). InnerVolumeSpecName "kube-api-access-rtdkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.481107 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" (UID: "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.484383 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-config" (OuterVolumeSpecName: "config") pod "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" (UID: "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.506018 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" (UID: "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.506888 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.507022 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" event={"ID":"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a","Type":"ContainerDied","Data":"f03447e044ec076808966e68051e046395f76c21c7d74f8ecac4fe5c4986784d"} Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.507091 4751 scope.go:117] "RemoveContainer" containerID="45febbc6f670463da13688cdf32eacffd29dea9d71d9b8485fc96c2e17071d4a" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.508183 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" (UID: "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.510184 4751 generic.go:334] "Generic (PLEG): container finished" podID="714bda18-396a-4c61-b32c-28c97f9212c7" containerID="ef63bfb279b0f69282ea04ec8633731532edcb149df478089ae1d0918490a1d0" exitCode=0 Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.510234 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" event={"ID":"714bda18-396a-4c61-b32c-28c97f9212c7","Type":"ContainerDied","Data":"ef63bfb279b0f69282ea04ec8633731532edcb149df478089ae1d0918490a1d0"} Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.514232 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" (UID: "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.520287 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"216704d4-5c21-497f-95b7-1e882daec251","Type":"ContainerStarted","Data":"e11c96cff5a330d983c0e1dc8367150a90bc8159e4fde7a3ae81c0c2e9080bec"} Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.542167 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ae4a36d-9cf4-40cd-aa7a-6da368242040","Type":"ContainerStarted","Data":"753c93823a40b1ea10f438daff89be405292baad01f508a3efec7b992c676a97"} Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.545726 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.545758 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.545767 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.545778 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.545788 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtdkx\" (UniqueName: \"kubernetes.io/projected/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-kube-api-access-rtdkx\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.545796 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.940534 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-64gs8"] Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.950007 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-64gs8"] Jan 30 21:37:42 crc kubenswrapper[4751]: I0130 21:37:42.005142 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" path="/var/lib/kubelet/pods/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a/volumes" Jan 30 21:37:42 crc kubenswrapper[4751]: I0130 21:37:42.570441 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" event={"ID":"714bda18-396a-4c61-b32c-28c97f9212c7","Type":"ContainerStarted","Data":"d75cd44bc174bd1c8fb960d6b48079304f29a447af1f96a3c4feb1e101ec22b3"} Jan 30 21:37:42 crc kubenswrapper[4751]: I0130 21:37:42.570977 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:42 crc kubenswrapper[4751]: I0130 21:37:42.574992 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"216704d4-5c21-497f-95b7-1e882daec251","Type":"ContainerStarted","Data":"119884ecc859ddc20e43a24694f7a4c243d3d0650e6821ce6c5c66516d15e09a"} Jan 30 21:37:42 crc kubenswrapper[4751]: I0130 21:37:42.587079 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ae4a36d-9cf4-40cd-aa7a-6da368242040","Type":"ContainerStarted","Data":"8c7c23c935d0608587f83d483f2b18799c5f2ff038edce5ec3fb7b909b434972"} Jan 30 21:37:42 crc kubenswrapper[4751]: I0130 21:37:42.601896 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" podStartSLOduration=4.601881721 podStartE2EDuration="4.601881721s" podCreationTimestamp="2026-01-30 21:37:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:37:42.600862294 +0000 UTC m=+1401.346684943" watchObservedRunningTime="2026-01-30 21:37:42.601881721 +0000 UTC m=+1401.347704370" Jan 30 21:37:44 crc kubenswrapper[4751]: I0130 21:37:44.620841 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"216704d4-5c21-497f-95b7-1e882daec251","Type":"ContainerStarted","Data":"f55630451a66194662e90f8ccee31ad40c7dd8e161f684fe03ebc06b15f136e8"} Jan 30 21:37:44 crc kubenswrapper[4751]: I0130 21:37:44.620934 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="216704d4-5c21-497f-95b7-1e882daec251" containerName="glance-log" containerID="cri-o://119884ecc859ddc20e43a24694f7a4c243d3d0650e6821ce6c5c66516d15e09a" gracePeriod=30 Jan 30 21:37:44 crc kubenswrapper[4751]: I0130 21:37:44.621028 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="216704d4-5c21-497f-95b7-1e882daec251" containerName="glance-httpd" containerID="cri-o://f55630451a66194662e90f8ccee31ad40c7dd8e161f684fe03ebc06b15f136e8" gracePeriod=30 Jan 30 21:37:44 crc kubenswrapper[4751]: I0130 21:37:44.623037 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ae4a36d-9cf4-40cd-aa7a-6da368242040","Type":"ContainerStarted","Data":"4c18d2acdcf0687ee372f2e7681595403ba57b693482cb625c56dc4593cc7152"} Jan 30 21:37:44 crc kubenswrapper[4751]: I0130 21:37:44.623191 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" containerName="glance-log" containerID="cri-o://8c7c23c935d0608587f83d483f2b18799c5f2ff038edce5ec3fb7b909b434972" gracePeriod=30 Jan 30 21:37:44 crc kubenswrapper[4751]: I0130 21:37:44.623260 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" containerName="glance-httpd" containerID="cri-o://4c18d2acdcf0687ee372f2e7681595403ba57b693482cb625c56dc4593cc7152" gracePeriod=30 Jan 30 21:37:44 crc kubenswrapper[4751]: I0130 21:37:44.646629 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.646589298 podStartE2EDuration="7.646589298s" podCreationTimestamp="2026-01-30 21:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 
21:37:44.645063077 +0000 UTC m=+1403.390885726" watchObservedRunningTime="2026-01-30 21:37:44.646589298 +0000 UTC m=+1403.392411957" Jan 30 21:37:44 crc kubenswrapper[4751]: I0130 21:37:44.674627 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.674604616 podStartE2EDuration="7.674604616s" podCreationTimestamp="2026-01-30 21:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:37:44.672173781 +0000 UTC m=+1403.417996430" watchObservedRunningTime="2026-01-30 21:37:44.674604616 +0000 UTC m=+1403.420427265" Jan 30 21:37:45 crc kubenswrapper[4751]: I0130 21:37:45.638917 4751 generic.go:334] "Generic (PLEG): container finished" podID="a83a8c4a-677e-4481-b671-f5fa6edadb5f" containerID="7254994e57fe71a7702af83ba12ef3a837f896f1d6e6e6a7dbba9ca54cdfc1ad" exitCode=0 Jan 30 21:37:45 crc kubenswrapper[4751]: I0130 21:37:45.639132 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tpqxs" event={"ID":"a83a8c4a-677e-4481-b671-f5fa6edadb5f","Type":"ContainerDied","Data":"7254994e57fe71a7702af83ba12ef3a837f896f1d6e6e6a7dbba9ca54cdfc1ad"} Jan 30 21:37:45 crc kubenswrapper[4751]: I0130 21:37:45.642649 4751 generic.go:334] "Generic (PLEG): container finished" podID="216704d4-5c21-497f-95b7-1e882daec251" containerID="f55630451a66194662e90f8ccee31ad40c7dd8e161f684fe03ebc06b15f136e8" exitCode=0 Jan 30 21:37:45 crc kubenswrapper[4751]: I0130 21:37:45.642672 4751 generic.go:334] "Generic (PLEG): container finished" podID="216704d4-5c21-497f-95b7-1e882daec251" containerID="119884ecc859ddc20e43a24694f7a4c243d3d0650e6821ce6c5c66516d15e09a" exitCode=143 Jan 30 21:37:45 crc kubenswrapper[4751]: I0130 21:37:45.642703 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"216704d4-5c21-497f-95b7-1e882daec251","Type":"ContainerDied","Data":"f55630451a66194662e90f8ccee31ad40c7dd8e161f684fe03ebc06b15f136e8"} Jan 30 21:37:45 crc kubenswrapper[4751]: I0130 21:37:45.642723 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"216704d4-5c21-497f-95b7-1e882daec251","Type":"ContainerDied","Data":"119884ecc859ddc20e43a24694f7a4c243d3d0650e6821ce6c5c66516d15e09a"} Jan 30 21:37:45 crc kubenswrapper[4751]: I0130 21:37:45.645019 4751 generic.go:334] "Generic (PLEG): container finished" podID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" containerID="4c18d2acdcf0687ee372f2e7681595403ba57b693482cb625c56dc4593cc7152" exitCode=0 Jan 30 21:37:45 crc kubenswrapper[4751]: I0130 21:37:45.645045 4751 generic.go:334] "Generic (PLEG): container finished" podID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" containerID="8c7c23c935d0608587f83d483f2b18799c5f2ff038edce5ec3fb7b909b434972" exitCode=143 Jan 30 21:37:45 crc kubenswrapper[4751]: I0130 21:37:45.645065 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ae4a36d-9cf4-40cd-aa7a-6da368242040","Type":"ContainerDied","Data":"4c18d2acdcf0687ee372f2e7681595403ba57b693482cb625c56dc4593cc7152"} Jan 30 21:37:45 crc kubenswrapper[4751]: I0130 21:37:45.645088 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ae4a36d-9cf4-40cd-aa7a-6da368242040","Type":"ContainerDied","Data":"8c7c23c935d0608587f83d483f2b18799c5f2ff038edce5ec3fb7b909b434972"} Jan 30 
21:37:47 crc kubenswrapper[4751]: I0130 21:37:47.036928 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:47 crc kubenswrapper[4751]: I0130 21:37:47.047848 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:47 crc kubenswrapper[4751]: I0130 21:37:47.669671 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:49 crc kubenswrapper[4751]: I0130 21:37:49.030490 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:49 crc kubenswrapper[4751]: I0130 21:37:49.114090 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-jnhv7"] Jan 30 21:37:49 crc kubenswrapper[4751]: I0130 21:37:49.120795 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" containerName="dnsmasq-dns" containerID="cri-o://5964e334ee213f037ca3d06aae948d2d9e897aa60cd6fd6594177910b8efb612" gracePeriod=10 Jan 30 21:37:49 crc kubenswrapper[4751]: E0130 21:37:49.370166 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37ac1bbe_c547_456d_8b0a_0c29a877775c.slice/crio-5964e334ee213f037ca3d06aae948d2d9e897aa60cd6fd6594177910b8efb612.scope\": RecentStats: unable to find data in memory cache]" Jan 30 21:37:49 crc kubenswrapper[4751]: I0130 21:37:49.685984 4751 generic.go:334] "Generic (PLEG): container finished" podID="37ac1bbe-c547-456d-8b0a-0c29a877775c" containerID="5964e334ee213f037ca3d06aae948d2d9e897aa60cd6fd6594177910b8efb612" exitCode=0 Jan 30 21:37:49 crc kubenswrapper[4751]: I0130 21:37:49.686027 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" event={"ID":"37ac1bbe-c547-456d-8b0a-0c29a877775c","Type":"ContainerDied","Data":"5964e334ee213f037ca3d06aae948d2d9e897aa60cd6fd6594177910b8efb612"} Jan 30 21:37:52 crc kubenswrapper[4751]: I0130 21:37:52.600396 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: connect: connection refused" Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.024793 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-586n4"] Jan 30 21:37:56 crc kubenswrapper[4751]: E0130 21:37:56.025777 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" containerName="init" Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.025792 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" containerName="init" Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.026017 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" containerName="init" Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.027834 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.063340 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-586n4"] Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.101567 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-catalog-content\") pod \"redhat-operators-586n4\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.101671 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjpj4\" (UniqueName: \"kubernetes.io/projected/f42767ff-b1d3-49e9-8b8d-39c65ea98978-kube-api-access-wjpj4\") pod \"redhat-operators-586n4\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.101865 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-utilities\") pod \"redhat-operators-586n4\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.203164 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-catalog-content\") pod \"redhat-operators-586n4\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.203265 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjpj4\" (UniqueName: \"kubernetes.io/projected/f42767ff-b1d3-49e9-8b8d-39c65ea98978-kube-api-access-wjpj4\") pod \"redhat-operators-586n4\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.203384 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-utilities\") pod \"redhat-operators-586n4\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.203671 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-catalog-content\") pod \"redhat-operators-586n4\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.203844 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-utilities\") pod \"redhat-operators-586n4\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.236666 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wjpj4\" (UniqueName: \"kubernetes.io/projected/f42767ff-b1d3-49e9-8b8d-39c65ea98978-kube-api-access-wjpj4\") pod \"redhat-operators-586n4\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.354748 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:37:57 crc kubenswrapper[4751]: I0130 21:37:57.601305 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: connect: connection refused" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.059190 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.065091 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.079165 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.165969 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-internal-tls-certs\") pod \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.166043 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-logs\") pod \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.166108 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-combined-ca-bundle\") pod \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.166125 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbsxf\" (UniqueName: \"kubernetes.io/projected/a83a8c4a-677e-4481-b671-f5fa6edadb5f-kube-api-access-sbsxf\") pod \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.166158 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-combined-ca-bundle\") pod \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.166185 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-credential-keys\") pod \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.166220 4751 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-httpd-run\") pod \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.166262 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-scripts\") pod \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.167504 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-fernet-keys\") pod \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.167566 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-scripts\") pod \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.167604 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7sv4\" (UniqueName: \"kubernetes.io/projected/4ae4a36d-9cf4-40cd-aa7a-6da368242040-kube-api-access-v7sv4\") pod \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.167621 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-config-data\") pod \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.167670 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-config-data\") pod \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.167765 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.168362 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-logs" (OuterVolumeSpecName: "logs") pod "4ae4a36d-9cf4-40cd-aa7a-6da368242040" (UID: "4ae4a36d-9cf4-40cd-aa7a-6da368242040"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.168867 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4ae4a36d-9cf4-40cd-aa7a-6da368242040" (UID: "4ae4a36d-9cf4-40cd-aa7a-6da368242040"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.169500 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.169716 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.174185 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-scripts" (OuterVolumeSpecName: "scripts") pod "4ae4a36d-9cf4-40cd-aa7a-6da368242040" (UID: "4ae4a36d-9cf4-40cd-aa7a-6da368242040"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.174448 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a83a8c4a-677e-4481-b671-f5fa6edadb5f-kube-api-access-sbsxf" (OuterVolumeSpecName: "kube-api-access-sbsxf") pod "a83a8c4a-677e-4481-b671-f5fa6edadb5f" (UID: "a83a8c4a-677e-4481-b671-f5fa6edadb5f"). InnerVolumeSpecName "kube-api-access-sbsxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.174889 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a83a8c4a-677e-4481-b671-f5fa6edadb5f" (UID: "a83a8c4a-677e-4481-b671-f5fa6edadb5f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.175275 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ae4a36d-9cf4-40cd-aa7a-6da368242040-kube-api-access-v7sv4" (OuterVolumeSpecName: "kube-api-access-v7sv4") pod "4ae4a36d-9cf4-40cd-aa7a-6da368242040" (UID: "4ae4a36d-9cf4-40cd-aa7a-6da368242040"). InnerVolumeSpecName "kube-api-access-v7sv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.177921 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a83a8c4a-677e-4481-b671-f5fa6edadb5f" (UID: "a83a8c4a-677e-4481-b671-f5fa6edadb5f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.205485 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-scripts" (OuterVolumeSpecName: "scripts") pod "a83a8c4a-677e-4481-b671-f5fa6edadb5f" (UID: "a83a8c4a-677e-4481-b671-f5fa6edadb5f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.231580 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ae4a36d-9cf4-40cd-aa7a-6da368242040" (UID: "4ae4a36d-9cf4-40cd-aa7a-6da368242040"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.236700 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-config-data" (OuterVolumeSpecName: "config-data") pod "a83a8c4a-677e-4481-b671-f5fa6edadb5f" (UID: "a83a8c4a-677e-4481-b671-f5fa6edadb5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.238263 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227" (OuterVolumeSpecName: "glance") pod "4ae4a36d-9cf4-40cd-aa7a-6da368242040" (UID: "4ae4a36d-9cf4-40cd-aa7a-6da368242040"). InnerVolumeSpecName "pvc-2b6fe968-3470-4548-ade6-9a3644e74227". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.258772 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a83a8c4a-677e-4481-b671-f5fa6edadb5f" (UID: "a83a8c4a-677e-4481-b671-f5fa6edadb5f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.272465 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"216704d4-5c21-497f-95b7-1e882daec251\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.272549 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-config-data\") pod \"216704d4-5c21-497f-95b7-1e882daec251\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.274750 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79rgr\" (UniqueName: \"kubernetes.io/projected/216704d4-5c21-497f-95b7-1e882daec251-kube-api-access-79rgr\") pod \"216704d4-5c21-497f-95b7-1e882daec251\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.274847 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-scripts\") pod \"216704d4-5c21-497f-95b7-1e882daec251\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.275673 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-logs\") pod \"216704d4-5c21-497f-95b7-1e882daec251\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.275724 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-public-tls-certs\") pod \"216704d4-5c21-497f-95b7-1e882daec251\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " Jan 30 
21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.275766 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-httpd-run\") pod \"216704d4-5c21-497f-95b7-1e882daec251\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.276498 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "216704d4-5c21-497f-95b7-1e882daec251" (UID: "216704d4-5c21-497f-95b7-1e882daec251"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.276722 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-logs" (OuterVolumeSpecName: "logs") pod "216704d4-5c21-497f-95b7-1e882daec251" (UID: "216704d4-5c21-497f-95b7-1e882daec251"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.276849 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-combined-ca-bundle\") pod \"216704d4-5c21-497f-95b7-1e882daec251\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.278271 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7sv4\" (UniqueName: \"kubernetes.io/projected/4ae4a36d-9cf4-40cd-aa7a-6da368242040-kube-api-access-v7sv4\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.278290 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.278354 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") on node \"crc\" " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.278367 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.278378 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.278429 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.278440 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbsxf\" (UniqueName: \"kubernetes.io/projected/a83a8c4a-677e-4481-b671-f5fa6edadb5f-kube-api-access-sbsxf\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.278449 4751 reconciler_common.go:293] 
"Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.278458 4751 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.280735 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.280754 4751 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.280762 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.283511 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-scripts" (OuterVolumeSpecName: "scripts") pod "216704d4-5c21-497f-95b7-1e882daec251" (UID: "216704d4-5c21-497f-95b7-1e882daec251"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.283580 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216704d4-5c21-497f-95b7-1e882daec251-kube-api-access-79rgr" (OuterVolumeSpecName: "kube-api-access-79rgr") pod "216704d4-5c21-497f-95b7-1e882daec251" (UID: "216704d4-5c21-497f-95b7-1e882daec251"). InnerVolumeSpecName "kube-api-access-79rgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.304531 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-config-data" (OuterVolumeSpecName: "config-data") pod "4ae4a36d-9cf4-40cd-aa7a-6da368242040" (UID: "4ae4a36d-9cf4-40cd-aa7a-6da368242040"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.308581 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a" (OuterVolumeSpecName: "glance") pod "216704d4-5c21-497f-95b7-1e882daec251" (UID: "216704d4-5c21-497f-95b7-1e882daec251"). InnerVolumeSpecName "pvc-03216ddc-ff0c-4c63-8e03-12380926233a". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.311344 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4ae4a36d-9cf4-40cd-aa7a-6da368242040" (UID: "4ae4a36d-9cf4-40cd-aa7a-6da368242040"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.319888 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "216704d4-5c21-497f-95b7-1e882daec251" (UID: "216704d4-5c21-497f-95b7-1e882daec251"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.328030 4751 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.328179 4751 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2b6fe968-3470-4548-ade6-9a3644e74227" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227") on node "crc" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.338937 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "216704d4-5c21-497f-95b7-1e882daec251" (UID: "216704d4-5c21-497f-95b7-1e882daec251"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.346614 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-config-data" (OuterVolumeSpecName: "config-data") pod "216704d4-5c21-497f-95b7-1e882daec251" (UID: "216704d4-5c21-497f-95b7-1e882daec251"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.382763 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") on node \"crc\" " Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.382806 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.382818 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.382828 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79rgr\" (UniqueName: \"kubernetes.io/projected/216704d4-5c21-497f-95b7-1e882daec251-kube-api-access-79rgr\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.382847 4751 reconciler_common.go:293] "Volume detached for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.382858 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.382867 4751 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.382874 4751 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.382883 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.406169 4751 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.406413 4751 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-03216ddc-ff0c-4c63-8e03-12380926233a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a") on node "crc" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.483835 4751 reconciler_common.go:293] "Volume detached for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.803307 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ae4a36d-9cf4-40cd-aa7a-6da368242040","Type":"ContainerDied","Data":"753c93823a40b1ea10f438daff89be405292baad01f508a3efec7b992c676a97"} Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.803381 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.803404 4751 scope.go:117] "RemoveContainer" containerID="4c18d2acdcf0687ee372f2e7681595403ba57b693482cb625c56dc4593cc7152" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.813041 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.817931 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tpqxs" event={"ID":"a83a8c4a-677e-4481-b671-f5fa6edadb5f","Type":"ContainerDied","Data":"735b101ef3e18e05703b0e39bfc247d4eafc863fa5ee869699895513af302ba8"} Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.817989 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="735b101ef3e18e05703b0e39bfc247d4eafc863fa5ee869699895513af302ba8" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.828687 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"216704d4-5c21-497f-95b7-1e882daec251","Type":"ContainerDied","Data":"e11c96cff5a330d983c0e1dc8367150a90bc8159e4fde7a3ae81c0c2e9080bec"} Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.828786 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.854193 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.862709 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.894637 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 21:37:59 crc kubenswrapper[4751]: E0130 21:37:59.895157 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216704d4-5c21-497f-95b7-1e882daec251" containerName="glance-httpd" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.895177 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="216704d4-5c21-497f-95b7-1e882daec251" containerName="glance-httpd" Jan 30 21:37:59 crc kubenswrapper[4751]: E0130 21:37:59.895200 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83a8c4a-677e-4481-b671-f5fa6edadb5f" containerName="keystone-bootstrap" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.895217 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83a8c4a-677e-4481-b671-f5fa6edadb5f" containerName="keystone-bootstrap" Jan 30 21:37:59 crc kubenswrapper[4751]: E0130 21:37:59.895237 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216704d4-5c21-497f-95b7-1e882daec251" containerName="glance-log" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.895244 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="216704d4-5c21-497f-95b7-1e882daec251" containerName="glance-log" Jan 30 21:37:59 crc kubenswrapper[4751]: E0130 21:37:59.895258 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" containerName="glance-httpd" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.895266 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" containerName="glance-httpd" Jan 30 21:37:59 crc kubenswrapper[4751]: E0130 21:37:59.895274 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" containerName="glance-log" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.895280 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" containerName="glance-log" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.895500 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a83a8c4a-677e-4481-b671-f5fa6edadb5f" containerName="keystone-bootstrap" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.895524 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="216704d4-5c21-497f-95b7-1e882daec251" containerName="glance-httpd" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.895533 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="216704d4-5c21-497f-95b7-1e882daec251" containerName="glance-log" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.895545 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" containerName="glance-log" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.895557 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" containerName="glance-httpd" Jan 30 21:37:59 crc 
kubenswrapper[4751]: I0130 21:37:59.896875 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.901344 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.901591 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.902132 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qdcvb" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.902244 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.912172 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.923172 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.932637 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.938237 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.940425 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.945634 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.946087 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.949818 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.009457 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="216704d4-5c21-497f-95b7-1e882daec251" path="/var/lib/kubelet/pods/216704d4-5c21-497f-95b7-1e882daec251/volumes" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.011156 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" path="/var/lib/kubelet/pods/4ae4a36d-9cf4-40cd-aa7a-6da368242040/volumes" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.105623 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-config-data\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.106471 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-logs\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 
21:38:00.106596 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.106676 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-logs\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.106753 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.106825 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xqhk\" (UniqueName: \"kubernetes.io/projected/58e79616-9b52-47f9-a43e-01cbd487fbbd-kube-api-access-4xqhk\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.106997 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.107089 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.107165 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.107280 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.107378 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.107470 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.107605 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.107681 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfdlg\" (UniqueName: \"kubernetes.io/projected/33588f5e-9224-4dd6-b689-0651c16d06bd-kube-api-access-mfdlg\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.107757 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.107891 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.211850 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.212079 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-logs\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.212176 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.212270 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-4xqhk\" (UniqueName: \"kubernetes.io/projected/58e79616-9b52-47f9-a43e-01cbd487fbbd-kube-api-access-4xqhk\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.212407 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.212533 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.212637 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.212788 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.212877 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-scripts\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.213001 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.213136 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.213226 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfdlg\" (UniqueName: \"kubernetes.io/projected/33588f5e-9224-4dd6-b689-0651c16d06bd-kube-api-access-mfdlg\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.213317 4751 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.213486 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.213600 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-config-data\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.213682 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-logs\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.214362 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-logs\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.222276 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.226054 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-logs\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.226137 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.226458 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.227560 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.235081 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.235651 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-config-data\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.235988 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.236372 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-scripts\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.238946 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.250043 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.253015 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfdlg\" (UniqueName: \"kubernetes.io/projected/33588f5e-9224-4dd6-b689-0651c16d06bd-kube-api-access-mfdlg\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.265234 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xqhk\" (UniqueName: \"kubernetes.io/projected/58e79616-9b52-47f9-a43e-01cbd487fbbd-kube-api-access-4xqhk\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.307395 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-tpqxs"] Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.324079 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-tpqxs"] Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.349588 4751 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.349633 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bd5683b2fac8da06378b2d5eb72c7d0b6faa54e75d4b318b8013499a38483353/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.355613 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.355660 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1439638cb8026f3fbd74a1d30ab35170ee3b35899e999b31e76311ef8605b4f/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.412278 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-89chj"] Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.413697 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.415203 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.421353 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.421483 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-89chj"] Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.421532 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.421672 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.421716 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6zjrt" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.421685 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.424536 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.518591 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqlp9\" (UniqueName: \"kubernetes.io/projected/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-kube-api-access-vqlp9\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.518647 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-combined-ca-bundle\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.518775 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-scripts\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.518939 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-credential-keys\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.519143 
4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-fernet-keys\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.519447 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-config-data\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.587155 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.603366 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.621532 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-fernet-keys\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.621640 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-config-data\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.621672 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqlp9\" (UniqueName: \"kubernetes.io/projected/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-kube-api-access-vqlp9\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.621711 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-combined-ca-bundle\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.621755 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-scripts\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.621819 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-credential-keys\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.627155 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-credential-keys\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.627560 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-config-data\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.628337 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-scripts\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.628397 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-fernet-keys\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.628668 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-combined-ca-bundle\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.643885 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqlp9\" (UniqueName: \"kubernetes.io/projected/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-kube-api-access-vqlp9\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.785319 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:01 crc kubenswrapper[4751]: I0130 21:38:01.991583 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a83a8c4a-677e-4481-b671-f5fa6edadb5f" path="/var/lib/kubelet/pods/a83a8c4a-677e-4481-b671-f5fa6edadb5f/volumes" Jan 30 21:38:07 crc kubenswrapper[4751]: I0130 21:38:07.600823 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: i/o timeout" Jan 30 21:38:07 crc kubenswrapper[4751]: I0130 21:38:07.601723 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:38:07 crc kubenswrapper[4751]: I0130 21:38:07.913971 4751 generic.go:334] "Generic (PLEG): container finished" podID="d42b4031-ca3e-4b28-b62a-eb346132dc3a" containerID="3f232e6698625cc60ad1770425a3662d4b2453997f82a2581cabc9a30c379df0" exitCode=0 Jan 30 21:38:07 crc kubenswrapper[4751]: I0130 21:38:07.914017 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lwm4t" event={"ID":"d42b4031-ca3e-4b28-b62a-eb346132dc3a","Type":"ContainerDied","Data":"3f232e6698625cc60ad1770425a3662d4b2453997f82a2581cabc9a30c379df0"} Jan 30 21:38:08 crc kubenswrapper[4751]: E0130 21:38:08.624530 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 30 21:38:08 crc kubenswrapper[4751]: E0130 21:38:08.624786 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf8hcchd4h5dh69h58ch644h55dh655h559h88hd6h5bfh688h89hf5h5f4h598h646h56bh658h544h5c4h84h55fh545h678hc7h56h657h5d7h55cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6x98w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(36866d1c-b1a0-4d3e-a87f-f5901b053bb5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:38:08 crc kubenswrapper[4751]: E0130 21:38:08.934600 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Jan 30 21:38:08 crc kubenswrapper[4751]: E0130 21:38:08.934750 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxctb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-npwgd_openstack(1051dd3c-5d30-47f1-8162-3a3e9d5ee271): ErrImagePull: rpc error: code = Canceled desc = copying config: context 
canceled" logger="UnhandledError" Jan 30 21:38:08 crc kubenswrapper[4751]: E0130 21:38:08.935931 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-npwgd" podUID="1051dd3c-5d30-47f1-8162-3a3e9d5ee271" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.099688 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.180043 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rqmp\" (UniqueName: \"kubernetes.io/projected/37ac1bbe-c547-456d-8b0a-0c29a877775c-kube-api-access-9rqmp\") pod \"37ac1bbe-c547-456d-8b0a-0c29a877775c\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.180111 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-nb\") pod \"37ac1bbe-c547-456d-8b0a-0c29a877775c\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.180207 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-svc\") pod \"37ac1bbe-c547-456d-8b0a-0c29a877775c\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.180395 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-config\") pod \"37ac1bbe-c547-456d-8b0a-0c29a877775c\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.180688 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-sb\") pod \"37ac1bbe-c547-456d-8b0a-0c29a877775c\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.180739 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-swift-storage-0\") pod \"37ac1bbe-c547-456d-8b0a-0c29a877775c\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.200501 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37ac1bbe-c547-456d-8b0a-0c29a877775c-kube-api-access-9rqmp" (OuterVolumeSpecName: "kube-api-access-9rqmp") pod "37ac1bbe-c547-456d-8b0a-0c29a877775c" (UID: "37ac1bbe-c547-456d-8b0a-0c29a877775c"). InnerVolumeSpecName "kube-api-access-9rqmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.243567 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "37ac1bbe-c547-456d-8b0a-0c29a877775c" (UID: "37ac1bbe-c547-456d-8b0a-0c29a877775c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.251876 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "37ac1bbe-c547-456d-8b0a-0c29a877775c" (UID: "37ac1bbe-c547-456d-8b0a-0c29a877775c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.253637 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "37ac1bbe-c547-456d-8b0a-0c29a877775c" (UID: "37ac1bbe-c547-456d-8b0a-0c29a877775c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.260904 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-config" (OuterVolumeSpecName: "config") pod "37ac1bbe-c547-456d-8b0a-0c29a877775c" (UID: "37ac1bbe-c547-456d-8b0a-0c29a877775c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.278176 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "37ac1bbe-c547-456d-8b0a-0c29a877775c" (UID: "37ac1bbe-c547-456d-8b0a-0c29a877775c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.282763 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.282796 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.282806 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.282815 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.282827 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.282837 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rqmp\" (UniqueName: \"kubernetes.io/projected/37ac1bbe-c547-456d-8b0a-0c29a877775c-kube-api-access-9rqmp\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.936219 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" 
event={"ID":"37ac1bbe-c547-456d-8b0a-0c29a877775c","Type":"ContainerDied","Data":"6864e85d2504e6732a265d6ea2bacb5cab1c5dcba817c3a3b4ad3a6ad9332eef"} Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.936250 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:38:09 crc kubenswrapper[4751]: E0130 21:38:09.939584 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-npwgd" podUID="1051dd3c-5d30-47f1-8162-3a3e9d5ee271" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.997025 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-jnhv7"] Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.006819 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-jnhv7"] Jan 30 21:38:10 crc kubenswrapper[4751]: E0130 21:38:10.338077 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 30 21:38:10 crc kubenswrapper[4751]: E0130 21:38:10.338602 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4xqxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMo
unt:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-bq6lp_openstack(564f3d8f-4b9f-4fe2-9464-baa31d6b7d24): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:38:10 crc kubenswrapper[4751]: E0130 21:38:10.339872 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-bq6lp" podUID="564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.375946 4751 scope.go:117] "RemoveContainer" containerID="8c7c23c935d0608587f83d483f2b18799c5f2ff038edce5ec3fb7b909b434972" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.681319 4751 scope.go:117] "RemoveContainer" containerID="f55630451a66194662e90f8ccee31ad40c7dd8e161f684fe03ebc06b15f136e8" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.812800 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.841362 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghgm7\" (UniqueName: \"kubernetes.io/projected/d42b4031-ca3e-4b28-b62a-eb346132dc3a-kube-api-access-ghgm7\") pod \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.841404 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-config\") pod \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.841488 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-combined-ca-bundle\") pod \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.868378 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d42b4031-ca3e-4b28-b62a-eb346132dc3a-kube-api-access-ghgm7" (OuterVolumeSpecName: "kube-api-access-ghgm7") pod "d42b4031-ca3e-4b28-b62a-eb346132dc3a" (UID: "d42b4031-ca3e-4b28-b62a-eb346132dc3a"). InnerVolumeSpecName "kube-api-access-ghgm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.913784 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d42b4031-ca3e-4b28-b62a-eb346132dc3a" (UID: "d42b4031-ca3e-4b28-b62a-eb346132dc3a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.929994 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-config" (OuterVolumeSpecName: "config") pod "d42b4031-ca3e-4b28-b62a-eb346132dc3a" (UID: "d42b4031-ca3e-4b28-b62a-eb346132dc3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.941277 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-586n4"] Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.943639 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.943666 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghgm7\" (UniqueName: \"kubernetes.io/projected/d42b4031-ca3e-4b28-b62a-eb346132dc3a-kube-api-access-ghgm7\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.943679 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.967054 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lwm4t" event={"ID":"d42b4031-ca3e-4b28-b62a-eb346132dc3a","Type":"ContainerDied","Data":"8fffc2cd09b32d538d073fe7efb076cbe727bad01ceac9607f3b182ea08e3707"} Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.967105 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fffc2cd09b32d538d073fe7efb076cbe727bad01ceac9607f3b182ea08e3707" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.967181 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.977594 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rt7v2" event={"ID":"a90f6a78-a996-49f8-a567-d2699c737d1f","Type":"ContainerStarted","Data":"b0651f2072f5243ce1ec548bf97964b55b91bb1b69f6154e95b941b6b4ae52c4"} Jan 30 21:38:10 crc kubenswrapper[4751]: E0130 21:38:10.979548 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-bq6lp" podUID="564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.994357 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-v9spg" podStartSLOduration=4.619046591 podStartE2EDuration="33.994339388s" podCreationTimestamp="2026-01-30 21:37:37 +0000 UTC" firstStartedPulling="2026-01-30 21:37:39.55350511 +0000 UTC m=+1398.299327759" lastFinishedPulling="2026-01-30 21:38:08.928797907 +0000 UTC m=+1427.674620556" observedRunningTime="2026-01-30 21:38:10.993849695 +0000 UTC m=+1429.739672344" watchObservedRunningTime="2026-01-30 21:38:10.994339388 +0000 UTC m=+1429.740162037" Jan 30 21:38:11 crc kubenswrapper[4751]: I0130 21:38:11.037816 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-rt7v2" podStartSLOduration=2.379304687 podStartE2EDuration="33.037797002s" podCreationTimestamp="2026-01-30 21:37:38 +0000 UTC" firstStartedPulling="2026-01-30 21:37:39.661956664 +0000 UTC m=+1398.407779313" lastFinishedPulling="2026-01-30 21:38:10.320448949 +0000 UTC m=+1429.066271628" observedRunningTime="2026-01-30 21:38:11.026796377 +0000 UTC m=+1429.772619036" watchObservedRunningTime="2026-01-30 21:38:11.037797002 +0000 UTC m=+1429.783619651" Jan 30 21:38:11 crc kubenswrapper[4751]: I0130 21:38:11.103839 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-89chj"] Jan 30 21:38:11 crc kubenswrapper[4751]: I0130 21:38:11.180651 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 21:38:11 crc kubenswrapper[4751]: W0130 21:38:11.242374 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4c714e3_2147_4f8a_97cd_2e62e0f3a955.slice/crio-e0581ee8873f5edaa39aaa7f2d0b784b8439d87e92a780f77d43f409a915965b WatchSource:0}: Error finding container e0581ee8873f5edaa39aaa7f2d0b784b8439d87e92a780f77d43f409a915965b: Status 404 returned error can't find the container with id e0581ee8873f5edaa39aaa7f2d0b784b8439d87e92a780f77d43f409a915965b Jan 30 21:38:11 crc kubenswrapper[4751]: W0130 21:38:11.245163 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf42767ff_b1d3_49e9_8b8d_39c65ea98978.slice/crio-e5a114a7a3f0e24e3bd57896ba3f86ef24a269e372ca019ce144df066c9e2be1 WatchSource:0}: Error finding container e5a114a7a3f0e24e3bd57896ba3f86ef24a269e372ca019ce144df066c9e2be1: Status 404 returned error can't find the container with id e5a114a7a3f0e24e3bd57896ba3f86ef24a269e372ca019ce144df066c9e2be1 Jan 30 21:38:11 crc kubenswrapper[4751]: W0130 21:38:11.246718 4751 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58e79616_9b52_47f9_a43e_01cbd487fbbd.slice/crio-c29e1210fa3b7bf7fada16b3ee12edca743fdf4523588bf99fc7a7b6aa8b0f6d WatchSource:0}: Error finding container c29e1210fa3b7bf7fada16b3ee12edca743fdf4523588bf99fc7a7b6aa8b0f6d: Status 404 returned error can't find the container with id c29e1210fa3b7bf7fada16b3ee12edca743fdf4523588bf99fc7a7b6aa8b0f6d Jan 30 21:38:11 crc kubenswrapper[4751]: I0130 21:38:11.267108 4751 scope.go:117] "RemoveContainer" containerID="119884ecc859ddc20e43a24694f7a4c243d3d0650e6821ce6c5c66516d15e09a" Jan 30 21:38:11 crc kubenswrapper[4751]: I0130 21:38:11.317482 4751 scope.go:117] "RemoveContainer" containerID="5964e334ee213f037ca3d06aae948d2d9e897aa60cd6fd6594177910b8efb612" Jan 30 21:38:11 crc kubenswrapper[4751]: I0130 21:38:11.338929 4751 scope.go:117] "RemoveContainer" containerID="3fe25ad7467fd6a359800e1d2c4132e606e75ea363c0815021b0fb7427ca7b89" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.050031 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" path="/var/lib/kubelet/pods/37ac1bbe-c547-456d-8b0a-0c29a877775c/volumes" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.074907 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-89chj" event={"ID":"b4c714e3-2147-4f8a-97cd-2e62e0f3a955","Type":"ContainerStarted","Data":"1417d08c74e8789435dc5b0b0ef29190de93021ca824a689f4696bce6b1679a8"} Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.074947 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-89chj" event={"ID":"b4c714e3-2147-4f8a-97cd-2e62e0f3a955","Type":"ContainerStarted","Data":"e0581ee8873f5edaa39aaa7f2d0b784b8439d87e92a780f77d43f409a915965b"} Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.093225 4751 generic.go:334] "Generic (PLEG): container finished" podID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerID="e023642d7e8f9f5527a83bfc616f033c2d4851bd320c9d6b4ef572caee21ef7c" exitCode=0 Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.093338 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-586n4" event={"ID":"f42767ff-b1d3-49e9-8b8d-39c65ea98978","Type":"ContainerDied","Data":"e023642d7e8f9f5527a83bfc616f033c2d4851bd320c9d6b4ef572caee21ef7c"} Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.093372 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-586n4" event={"ID":"f42767ff-b1d3-49e9-8b8d-39c65ea98978","Type":"ContainerStarted","Data":"e5a114a7a3f0e24e3bd57896ba3f86ef24a269e372ca019ce144df066c9e2be1"} Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.120471 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v9spg" event={"ID":"8555e0d7-6d06-4edb-b463-86f7bf829949","Type":"ContainerStarted","Data":"2a0909f318a30556974662d8829ef78a359e73d89a596474535f309a8b496094"} Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.204300 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-trt9f"] Jan 30 21:38:12 crc kubenswrapper[4751]: E0130 21:38:12.204811 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" containerName="dnsmasq-dns" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.204824 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" 
containerName="dnsmasq-dns" Jan 30 21:38:12 crc kubenswrapper[4751]: E0130 21:38:12.204836 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" containerName="init" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.204842 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" containerName="init" Jan 30 21:38:12 crc kubenswrapper[4751]: E0130 21:38:12.204865 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d42b4031-ca3e-4b28-b62a-eb346132dc3a" containerName="neutron-db-sync" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.204873 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d42b4031-ca3e-4b28-b62a-eb346132dc3a" containerName="neutron-db-sync" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.205426 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" containerName="dnsmasq-dns" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.205456 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d42b4031-ca3e-4b28-b62a-eb346132dc3a" containerName="neutron-db-sync" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.206755 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.217553 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"58e79616-9b52-47f9-a43e-01cbd487fbbd","Type":"ContainerStarted","Data":"c29e1210fa3b7bf7fada16b3ee12edca743fdf4523588bf99fc7a7b6aa8b0f6d"} Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.224663 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.243627 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36866d1c-b1a0-4d3e-a87f-f5901b053bb5","Type":"ContainerStarted","Data":"e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5"} Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.286870 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-trt9f"] Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.291631 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g798q\" (UniqueName: \"kubernetes.io/projected/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-kube-api-access-g798q\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.293111 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.293271 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-svc\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f" 
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.297047 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-config\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.297229 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.297528 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.333388 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-566dccff6-ddvxf"]
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.335163 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-566dccff6-ddvxf"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.336938 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-89chj" podStartSLOduration=12.336919686 podStartE2EDuration="12.336919686s" podCreationTimestamp="2026-01-30 21:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:12.25231626 +0000 UTC m=+1430.998138909" watchObservedRunningTime="2026-01-30 21:38:12.336919686 +0000 UTC m=+1431.082742335"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.344866 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.345077 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-6hfgl"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.345179 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.345285 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.368892 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-566dccff6-ddvxf"]
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.435970 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g798q\" (UniqueName: \"kubernetes.io/projected/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-kube-api-access-g798q\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.450770 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.450994 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-svc\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.451227 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-config\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.451388 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.451650 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.451949 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.452304 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-svc\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.452554 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-config\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.452925 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.453429 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.528169 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g798q\" (UniqueName: \"kubernetes.io/projected/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-kube-api-access-g798q\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.563157 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-httpd-config\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.563211 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-combined-ca-bundle\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.563291 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-ovndb-tls-certs\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.563386 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29d9w\" (UniqueName: \"kubernetes.io/projected/11052d78-74b6-472a-aaba-513368f51ce3-kube-api-access-29d9w\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.563406 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-config\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.603680 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: i/o timeout"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.610289 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-trt9f"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.664682 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-ovndb-tls-certs\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.664817 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29d9w\" (UniqueName: \"kubernetes.io/projected/11052d78-74b6-472a-aaba-513368f51ce3-kube-api-access-29d9w\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.664842 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-config\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.664902 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-httpd-config\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.664931 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-combined-ca-bundle\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.670345 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-config\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.675398 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-combined-ca-bundle\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.687517 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-httpd-config\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.689939 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-ovndb-tls-certs\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.696920 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29d9w\" (UniqueName: \"kubernetes.io/projected/11052d78-74b6-472a-aaba-513368f51ce3-kube-api-access-29d9w\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf"
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.778316 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-566dccff6-ddvxf"
Jan 30 21:38:13 crc kubenswrapper[4751]: I0130 21:38:13.266100 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-trt9f"]
Jan 30 21:38:13 crc kubenswrapper[4751]: I0130 21:38:13.315229 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"58e79616-9b52-47f9-a43e-01cbd487fbbd","Type":"ContainerStarted","Data":"63bb4deba3a7aa55abb8828c7f8386975555baf8db7e5316f55f82adf4041031"}
Jan 30 21:38:13 crc kubenswrapper[4751]: I0130 21:38:13.319602 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33588f5e-9224-4dd6-b689-0651c16d06bd","Type":"ContainerStarted","Data":"0e6fc40159796236c1d006a538830d10bc94cb3396f193843abf8cb478b98954"}
Jan 30 21:38:13 crc kubenswrapper[4751]: I0130 21:38:13.319638 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33588f5e-9224-4dd6-b689-0651c16d06bd","Type":"ContainerStarted","Data":"ddb9f9108d0450b2b505f7e37bbbf5c491b44e23c17e0903abd3c8bd376265b3"}
Jan 30 21:38:13 crc kubenswrapper[4751]: I0130 21:38:13.733116 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-566dccff6-ddvxf"]
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.330978 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-586n4" event={"ID":"f42767ff-b1d3-49e9-8b8d-39c65ea98978","Type":"ContainerStarted","Data":"25b90bc3624912ca065d76095b1602d95f2fe189d80a97579243f563f8a8fa45"}
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.332679 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-566dccff6-ddvxf" event={"ID":"11052d78-74b6-472a-aaba-513368f51ce3","Type":"ContainerStarted","Data":"2ba96d5744b69d3f9276be5b0e9715862e0020d80034d236fcecc5d9420b54cc"}
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.332700 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-566dccff6-ddvxf" event={"ID":"11052d78-74b6-472a-aaba-513368f51ce3","Type":"ContainerStarted","Data":"bff81fd2907d366a655d26ebdf3a255c3bffa93ae91269d7fa674f369fb98f34"}
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.333897 4751 generic.go:334] "Generic (PLEG): container finished" podID="5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" containerID="a53508f839991fbdcbf4f267b810010ea1fe74ec4a4881adda9c8e4964af9678" exitCode=0
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.333935 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" event={"ID":"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de","Type":"ContainerDied","Data":"a53508f839991fbdcbf4f267b810010ea1fe74ec4a4881adda9c8e4964af9678"}
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.333950 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" event={"ID":"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de","Type":"ContainerStarted","Data":"076d7d4d931c2808d030541aca7f47cf2bc19d9d8238d02afb1c64b4babe3a92"}
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.336435 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"58e79616-9b52-47f9-a43e-01cbd487fbbd","Type":"ContainerStarted","Data":"38d6ea6d17555bee86d24d0120b47bfe85898a3b92da2d1783b64c466b54936d"}
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.432899 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=15.432881581 podStartE2EDuration="15.432881581s" podCreationTimestamp="2026-01-30 21:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:14.42796285 +0000 UTC m=+1433.173785499" watchObservedRunningTime="2026-01-30 21:38:14.432881581 +0000 UTC m=+1433.178704230"
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.735717 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5db486d6f7-9jq9s"]
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.739260 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5db486d6f7-9jq9s"
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.743675 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.744346 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.828447 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5db486d6f7-9jq9s"]
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.872531 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-internal-tls-certs\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s"
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.872578 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-httpd-config\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s"
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.872620 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-ovndb-tls-certs\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s"
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.872697 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-combined-ca-bundle\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s"
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.872730 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-config\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s"
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.872785 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-public-tls-certs\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s"
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.872803 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz77t\" (UniqueName: \"kubernetes.io/projected/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-kube-api-access-cz77t\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s"
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.980829 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-public-tls-certs\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s"
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.980880 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz77t\" (UniqueName: \"kubernetes.io/projected/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-kube-api-access-cz77t\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s"
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.980936 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-internal-tls-certs\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s"
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.980959 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-httpd-config\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s"
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.981010 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-ovndb-tls-certs\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s"
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.981113 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-combined-ca-bundle\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s"
Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.981152 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-config\") pod
\"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.990674 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-internal-tls-certs\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.993166 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-config\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:15 crc kubenswrapper[4751]: I0130 21:38:15.010317 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-httpd-config\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:15 crc kubenswrapper[4751]: I0130 21:38:15.010480 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-ovndb-tls-certs\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:15 crc kubenswrapper[4751]: I0130 21:38:15.010808 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-public-tls-certs\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:15 crc kubenswrapper[4751]: I0130 21:38:15.011073 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-combined-ca-bundle\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:15 crc kubenswrapper[4751]: I0130 21:38:15.014034 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz77t\" (UniqueName: \"kubernetes.io/projected/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-kube-api-access-cz77t\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:15 crc kubenswrapper[4751]: I0130 21:38:15.068807 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:15 crc kubenswrapper[4751]: I0130 21:38:15.348952 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33588f5e-9224-4dd6-b689-0651c16d06bd","Type":"ContainerStarted","Data":"404ff17c9262956b5de69cff0c330fcf3cee139543dbe993153748a7e4076c5f"} Jan 30 21:38:18 crc kubenswrapper[4751]: I0130 21:38:18.403571 4751 generic.go:334] "Generic (PLEG): container finished" podID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerID="25b90bc3624912ca065d76095b1602d95f2fe189d80a97579243f563f8a8fa45" exitCode=0 Jan 30 21:38:18 crc kubenswrapper[4751]: I0130 21:38:18.403646 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-586n4" event={"ID":"f42767ff-b1d3-49e9-8b8d-39c65ea98978","Type":"ContainerDied","Data":"25b90bc3624912ca065d76095b1602d95f2fe189d80a97579243f563f8a8fa45"} Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.437792 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" event={"ID":"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de","Type":"ContainerStarted","Data":"e4a0adf6c315c14550be616cb96a9ec23a37d406c363546310b316d3fbfb0236"} Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.438645 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.461935 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-566dccff6-ddvxf" event={"ID":"11052d78-74b6-472a-aaba-513368f51ce3","Type":"ContainerStarted","Data":"74aefce86656a68e812b38f7658b2359076b62078ffd0f3974807d58363f94b0"} Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.462107 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.472164 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" podStartSLOduration=8.472140959 podStartE2EDuration="8.472140959s" podCreationTimestamp="2026-01-30 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:20.46769866 +0000 UTC m=+1439.213521389" watchObservedRunningTime="2026-01-30 21:38:20.472140959 +0000 UTC m=+1439.217963618" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.516503 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=21.516478816 podStartE2EDuration="21.516478816s" podCreationTimestamp="2026-01-30 21:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:20.502290636 +0000 UTC m=+1439.248113285" watchObservedRunningTime="2026-01-30 21:38:20.516478816 +0000 UTC m=+1439.262301495" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.556099 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-566dccff6-ddvxf" podStartSLOduration=8.556073347 podStartE2EDuration="8.556073347s" podCreationTimestamp="2026-01-30 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:20.53864238 +0000 UTC m=+1439.284465029" 
watchObservedRunningTime="2026-01-30 21:38:20.556073347 +0000 UTC m=+1439.301896016" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.587594 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.587657 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.604101 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.604145 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.635980 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.649143 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.649732 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.656536 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 21:38:21 crc kubenswrapper[4751]: I0130 21:38:21.473513 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 21:38:21 crc kubenswrapper[4751]: I0130 21:38:21.474219 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 21:38:21 crc kubenswrapper[4751]: I0130 21:38:21.474244 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 21:38:21 crc kubenswrapper[4751]: I0130 21:38:21.474550 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 21:38:22 crc kubenswrapper[4751]: I0130 21:38:22.480917 4751 generic.go:334] "Generic (PLEG): container finished" podID="8555e0d7-6d06-4edb-b463-86f7bf829949" containerID="2a0909f318a30556974662d8829ef78a359e73d89a596474535f309a8b496094" exitCode=0 Jan 30 21:38:22 crc kubenswrapper[4751]: I0130 21:38:22.481587 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v9spg" event={"ID":"8555e0d7-6d06-4edb-b463-86f7bf829949","Type":"ContainerDied","Data":"2a0909f318a30556974662d8829ef78a359e73d89a596474535f309a8b496094"} Jan 30 21:38:23 crc kubenswrapper[4751]: I0130 21:38:23.494951 4751 generic.go:334] "Generic (PLEG): container finished" podID="a90f6a78-a996-49f8-a567-d2699c737d1f" containerID="b0651f2072f5243ce1ec548bf97964b55b91bb1b69f6154e95b941b6b4ae52c4" exitCode=0 Jan 30 21:38:23 crc kubenswrapper[4751]: I0130 21:38:23.495259 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rt7v2" event={"ID":"a90f6a78-a996-49f8-a567-d2699c737d1f","Type":"ContainerDied","Data":"b0651f2072f5243ce1ec548bf97964b55b91bb1b69f6154e95b941b6b4ae52c4"} Jan 30 21:38:23 crc kubenswrapper[4751]: I0130 21:38:23.498936 4751 generic.go:334] "Generic (PLEG): container finished" 
podID="b4c714e3-2147-4f8a-97cd-2e62e0f3a955" containerID="1417d08c74e8789435dc5b0b0ef29190de93021ca824a689f4696bce6b1679a8" exitCode=0 Jan 30 21:38:23 crc kubenswrapper[4751]: I0130 21:38:23.499002 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:38:23 crc kubenswrapper[4751]: I0130 21:38:23.499011 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:38:23 crc kubenswrapper[4751]: I0130 21:38:23.499321 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:38:23 crc kubenswrapper[4751]: I0130 21:38:23.499546 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:38:23 crc kubenswrapper[4751]: I0130 21:38:23.499409 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-89chj" event={"ID":"b4c714e3-2147-4f8a-97cd-2e62e0f3a955","Type":"ContainerDied","Data":"1417d08c74e8789435dc5b0b0ef29190de93021ca824a689f4696bce6b1679a8"} Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.091472 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-v9spg" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.126890 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.127211 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.227922 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb4qn\" (UniqueName: \"kubernetes.io/projected/8555e0d7-6d06-4edb-b463-86f7bf829949-kube-api-access-wb4qn\") pod \"8555e0d7-6d06-4edb-b463-86f7bf829949\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.227976 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8555e0d7-6d06-4edb-b463-86f7bf829949-logs\") pod \"8555e0d7-6d06-4edb-b463-86f7bf829949\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.228078 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-combined-ca-bundle\") pod \"8555e0d7-6d06-4edb-b463-86f7bf829949\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.228150 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-scripts\") pod \"8555e0d7-6d06-4edb-b463-86f7bf829949\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.228289 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-config-data\") pod \"8555e0d7-6d06-4edb-b463-86f7bf829949\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.228653 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8555e0d7-6d06-4edb-b463-86f7bf829949-logs" (OuterVolumeSpecName: "logs") pod "8555e0d7-6d06-4edb-b463-86f7bf829949" (UID: "8555e0d7-6d06-4edb-b463-86f7bf829949"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.229205 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8555e0d7-6d06-4edb-b463-86f7bf829949-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.233437 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8555e0d7-6d06-4edb-b463-86f7bf829949-kube-api-access-wb4qn" (OuterVolumeSpecName: "kube-api-access-wb4qn") pod "8555e0d7-6d06-4edb-b463-86f7bf829949" (UID: "8555e0d7-6d06-4edb-b463-86f7bf829949"). InnerVolumeSpecName "kube-api-access-wb4qn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.247458 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-scripts" (OuterVolumeSpecName: "scripts") pod "8555e0d7-6d06-4edb-b463-86f7bf829949" (UID: "8555e0d7-6d06-4edb-b463-86f7bf829949"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.270511 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-config-data" (OuterVolumeSpecName: "config-data") pod "8555e0d7-6d06-4edb-b463-86f7bf829949" (UID: "8555e0d7-6d06-4edb-b463-86f7bf829949"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.270525 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8555e0d7-6d06-4edb-b463-86f7bf829949" (UID: "8555e0d7-6d06-4edb-b463-86f7bf829949"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.337624 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb4qn\" (UniqueName: \"kubernetes.io/projected/8555e0d7-6d06-4edb-b463-86f7bf829949-kube-api-access-wb4qn\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.337660 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.337677 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.337686 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.447572 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.478900 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.506987 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5db486d6f7-9jq9s"] Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.510073 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 21:38:24 crc kubenswrapper[4751]: W0130 21:38:24.511557 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b32add6_b9f7_4e57_9dc8_ea71dbc40276.slice/crio-830f7a58fa0eff11fb7741f803681a1e2d985e3d4ba5a29223b173fc3c4a8925 WatchSource:0}: Error finding container 830f7a58fa0eff11fb7741f803681a1e2d985e3d4ba5a29223b173fc3c4a8925: Status 404 returned error can't find the container with id 830f7a58fa0eff11fb7741f803681a1e2d985e3d4ba5a29223b173fc3c4a8925 Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.512151 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36866d1c-b1a0-4d3e-a87f-f5901b053bb5","Type":"ContainerStarted","Data":"3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0"} Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.519846 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-586n4" event={"ID":"f42767ff-b1d3-49e9-8b8d-39c65ea98978","Type":"ContainerStarted","Data":"21ac36001cb714817d8ab855578743e4c5c5ddbfabc891012d0c87386994da9f"} Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.524533 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v9spg" event={"ID":"8555e0d7-6d06-4edb-b463-86f7bf829949","Type":"ContainerDied","Data":"7502198a116b3f2771f1ab3c57c8008044d28b3101423ffa202d372d5ac52b80"} Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.524551 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-v9spg" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.524565 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7502198a116b3f2771f1ab3c57c8008044d28b3101423ffa202d372d5ac52b80" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.528591 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.529522 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-npwgd" event={"ID":"1051dd3c-5d30-47f1-8162-3a3e9d5ee271","Type":"ContainerStarted","Data":"5e15084dd70b42693552f9d64b22474ea93dd026e14a53bb39cd74bd8ba86b97"} Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.603005 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-npwgd" podStartSLOduration=2.6926212449999998 podStartE2EDuration="47.602984015s" podCreationTimestamp="2026-01-30 21:37:37 +0000 UTC" firstStartedPulling="2026-01-30 21:37:39.156456604 +0000 UTC m=+1397.902279243" lastFinishedPulling="2026-01-30 21:38:24.066819364 +0000 UTC m=+1442.812642013" observedRunningTime="2026-01-30 21:38:24.565383807 +0000 UTC m=+1443.311206456" watchObservedRunningTime="2026-01-30 21:38:24.602984015 +0000 UTC m=+1443.348806664" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.662117 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-586n4" podStartSLOduration=17.688579625 podStartE2EDuration="29.662100178s" podCreationTimestamp="2026-01-30 21:37:55 +0000 UTC" firstStartedPulling="2026-01-30 21:38:12.095149871 +0000 UTC m=+1430.840972520" lastFinishedPulling="2026-01-30 21:38:24.068670424 +0000 UTC m=+1442.814493073" observedRunningTime="2026-01-30 21:38:24.622482127 +0000 UTC m=+1443.368304776" watchObservedRunningTime="2026-01-30 21:38:24.662100178 +0000 UTC m=+1443.407922827" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.728638 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5486cc9958-dvfn2"] Jan 30 21:38:24 crc kubenswrapper[4751]: E0130 21:38:24.729132 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8555e0d7-6d06-4edb-b463-86f7bf829949" containerName="placement-db-sync" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.729148 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8555e0d7-6d06-4edb-b463-86f7bf829949" containerName="placement-db-sync" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.729372 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8555e0d7-6d06-4edb-b463-86f7bf829949" containerName="placement-db-sync" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.731417 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.733180 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.734355 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8smxc" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.747146 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.747202 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.747342 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.749879 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5486cc9958-dvfn2"] Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.853278 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-internal-tls-certs\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.853340 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-scripts\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.853505 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-public-tls-certs\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.853590 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhxpw\" (UniqueName: \"kubernetes.io/projected/5089359d-290c-4b07-80e4-0c4c73ffa8cd-kube-api-access-hhxpw\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.853772 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-config-data\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.853840 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5089359d-290c-4b07-80e4-0c4c73ffa8cd-logs\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.853909 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-combined-ca-bundle\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.957118 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-scripts\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.957437 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-public-tls-certs\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.957476 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhxpw\" (UniqueName: \"kubernetes.io/projected/5089359d-290c-4b07-80e4-0c4c73ffa8cd-kube-api-access-hhxpw\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.957549 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-config-data\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.957576 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5089359d-290c-4b07-80e4-0c4c73ffa8cd-logs\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.957607 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-combined-ca-bundle\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.957728 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-internal-tls-certs\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.960915 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5089359d-290c-4b07-80e4-0c4c73ffa8cd-logs\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.962873 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-scripts\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.963994 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-combined-ca-bundle\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.976919 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-public-tls-certs\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.979940 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-config-data\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.981666 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-internal-tls-certs\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.981847 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhxpw\" (UniqueName: \"kubernetes.io/projected/5089359d-290c-4b07-80e4-0c4c73ffa8cd-kube-api-access-hhxpw\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.062889 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.254964 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.272821 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.371055 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-fernet-keys\") pod \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.371165 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-combined-ca-bundle\") pod \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.371229 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tbq8\" (UniqueName: \"kubernetes.io/projected/a90f6a78-a996-49f8-a567-d2699c737d1f-kube-api-access-6tbq8\") pod \"a90f6a78-a996-49f8-a567-d2699c737d1f\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.371268 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-combined-ca-bundle\") pod \"a90f6a78-a996-49f8-a567-d2699c737d1f\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.371369 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-credential-keys\") pod \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.371396 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-config-data\") pod \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.371421 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-db-sync-config-data\") pod \"a90f6a78-a996-49f8-a567-d2699c737d1f\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.371455 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-scripts\") pod \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.371503 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqlp9\" (UniqueName: \"kubernetes.io/projected/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-kube-api-access-vqlp9\") pod \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.376257 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b4c714e3-2147-4f8a-97cd-2e62e0f3a955" 
(UID: "b4c714e3-2147-4f8a-97cd-2e62e0f3a955"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.377232 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b4c714e3-2147-4f8a-97cd-2e62e0f3a955" (UID: "b4c714e3-2147-4f8a-97cd-2e62e0f3a955"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.378476 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-kube-api-access-vqlp9" (OuterVolumeSpecName: "kube-api-access-vqlp9") pod "b4c714e3-2147-4f8a-97cd-2e62e0f3a955" (UID: "b4c714e3-2147-4f8a-97cd-2e62e0f3a955"). InnerVolumeSpecName "kube-api-access-vqlp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.385024 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a90f6a78-a996-49f8-a567-d2699c737d1f-kube-api-access-6tbq8" (OuterVolumeSpecName: "kube-api-access-6tbq8") pod "a90f6a78-a996-49f8-a567-d2699c737d1f" (UID: "a90f6a78-a996-49f8-a567-d2699c737d1f"). InnerVolumeSpecName "kube-api-access-6tbq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.385455 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-scripts" (OuterVolumeSpecName: "scripts") pod "b4c714e3-2147-4f8a-97cd-2e62e0f3a955" (UID: "b4c714e3-2147-4f8a-97cd-2e62e0f3a955"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.385481 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a90f6a78-a996-49f8-a567-d2699c737d1f" (UID: "a90f6a78-a996-49f8-a567-d2699c737d1f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.411466 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a90f6a78-a996-49f8-a567-d2699c737d1f" (UID: "a90f6a78-a996-49f8-a567-d2699c737d1f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.413440 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-config-data" (OuterVolumeSpecName: "config-data") pod "b4c714e3-2147-4f8a-97cd-2e62e0f3a955" (UID: "b4c714e3-2147-4f8a-97cd-2e62e0f3a955"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.441411 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4c714e3-2147-4f8a-97cd-2e62e0f3a955" (UID: "b4c714e3-2147-4f8a-97cd-2e62e0f3a955"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.474310 4751 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.474390 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.474399 4751 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.474409 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.474418 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqlp9\" (UniqueName: \"kubernetes.io/projected/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-kube-api-access-vqlp9\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.474428 4751 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.474438 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.474448 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tbq8\" (UniqueName: \"kubernetes.io/projected/a90f6a78-a996-49f8-a567-d2699c737d1f-kube-api-access-6tbq8\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.474456 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.539066 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rt7v2" event={"ID":"a90f6a78-a996-49f8-a567-d2699c737d1f","Type":"ContainerDied","Data":"0da3c4e319786c371f66bda236269e4c334ffccdd92bab472a1f2cb2958a901e"} Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.539120 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0da3c4e319786c371f66bda236269e4c334ffccdd92bab472a1f2cb2958a901e" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.539084 4751 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.541180 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5db486d6f7-9jq9s" event={"ID":"5b32add6-b9f7-4e57-9dc8-ea71dbc40276","Type":"ContainerStarted","Data":"43f0c7937886815d9f7975ac5a567ad9805f039d1f79d20965dca1643fdccbec"} Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.541233 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5db486d6f7-9jq9s" event={"ID":"5b32add6-b9f7-4e57-9dc8-ea71dbc40276","Type":"ContainerStarted","Data":"ffe40f5beac55335ed5a0e5ca3f2b87505ff8c1d5062b9daa9f817649c1fad14"} Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.541244 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5db486d6f7-9jq9s" event={"ID":"5b32add6-b9f7-4e57-9dc8-ea71dbc40276","Type":"ContainerStarted","Data":"830f7a58fa0eff11fb7741f803681a1e2d985e3d4ba5a29223b173fc3c4a8925"} Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.541312 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.542607 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-89chj" event={"ID":"b4c714e3-2147-4f8a-97cd-2e62e0f3a955","Type":"ContainerDied","Data":"e0581ee8873f5edaa39aaa7f2d0b784b8439d87e92a780f77d43f409a915965b"} Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.542621 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.542641 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0581ee8873f5edaa39aaa7f2d0b784b8439d87e92a780f77d43f409a915965b" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.597569 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5db486d6f7-9jq9s" podStartSLOduration=11.597549992 podStartE2EDuration="11.597549992s" podCreationTimestamp="2026-01-30 21:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:25.56088074 +0000 UTC m=+1444.306703389" watchObservedRunningTime="2026-01-30 21:38:25.597549992 +0000 UTC m=+1444.343372641" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.633471 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5486cc9958-dvfn2"] Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.753414 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-55986d9fc9-zjsx4"] Jan 30 21:38:25 crc kubenswrapper[4751]: E0130 21:38:25.753905 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c714e3-2147-4f8a-97cd-2e62e0f3a955" containerName="keystone-bootstrap" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.753918 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c714e3-2147-4f8a-97cd-2e62e0f3a955" containerName="keystone-bootstrap" Jan 30 21:38:25 crc kubenswrapper[4751]: E0130 21:38:25.753947 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a90f6a78-a996-49f8-a567-d2699c737d1f" containerName="barbican-db-sync" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.753953 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90f6a78-a996-49f8-a567-d2699c737d1f" 
containerName="barbican-db-sync" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.754145 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4c714e3-2147-4f8a-97cd-2e62e0f3a955" containerName="keystone-bootstrap" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.754167 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a90f6a78-a996-49f8-a567-d2699c737d1f" containerName="barbican-db-sync" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.754890 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.773025 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.773409 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6zjrt" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.773608 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.774290 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.774614 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.774853 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.787154 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-55986d9fc9-zjsx4"] Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.880243 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-599c9789d8-7n2xt"] Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.887934 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-public-tls-certs\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.888915 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-fernet-keys\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.889449 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-internal-tls-certs\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.889606 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-config-data\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" 
Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.889731 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj5wz\" (UniqueName: \"kubernetes.io/projected/aab674da-e1ff-4881-9432-fad6b85111f2-kube-api-access-qj5wz\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.891248 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-combined-ca-bundle\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.891417 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-scripts\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.891583 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-credential-keys\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.893211 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.897065 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.899819 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.900811 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rrrpx" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.937407 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-599c9789d8-7n2xt"] Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994528 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-fernet-keys\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994585 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a46895c-c496-4a55-b580-37e5118d467e-logs\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994610 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data-custom\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994640 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-internal-tls-certs\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994668 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-config-data\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994699 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj5wz\" (UniqueName: \"kubernetes.io/projected/aab674da-e1ff-4881-9432-fad6b85111f2-kube-api-access-qj5wz\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994759 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994789 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-combined-ca-bundle\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994819 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-scripts\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994850 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-combined-ca-bundle\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994877 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-credential-keys\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994925 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-sg9wq\" (UniqueName: \"kubernetes.io/projected/7a46895c-c496-4a55-b580-37e5118d467e-kube-api-access-sg9wq\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994943 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-public-tls-certs\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.011100 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-credential-keys\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.018868 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-scripts\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.019785 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-public-tls-certs\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.030919 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-combined-ca-bundle\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.034070 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-internal-tls-certs\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.052050 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-config-data\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.059149 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-fernet-keys\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.078132 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj5wz\" (UniqueName: \"kubernetes.io/projected/aab674da-e1ff-4881-9432-fad6b85111f2-kube-api-access-qj5wz\") 
pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.096567 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.096864 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-combined-ca-bundle\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.097056 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg9wq\" (UniqueName: \"kubernetes.io/projected/7a46895c-c496-4a55-b580-37e5118d467e-kube-api-access-sg9wq\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.097225 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a46895c-c496-4a55-b580-37e5118d467e-logs\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.097319 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data-custom\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.102165 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a46895c-c496-4a55-b580-37e5118d467e-logs\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.117193 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data-custom\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.120022 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-combined-ca-bundle\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.121680 4751 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.124817 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-684d7cc675-gfk2w"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.126488 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.130775 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.137420 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg9wq\" (UniqueName: \"kubernetes.io/projected/7a46895c-c496-4a55-b580-37e5118d467e-kube-api-access-sg9wq\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.137726 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.164367 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-684d7cc675-gfk2w"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.204187 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97e3362c-da50-4989-abe0-9dde0694c635-logs\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.204319 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-combined-ca-bundle\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.204386 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data-custom\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.204512 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.204559 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88fh5\" (UniqueName: \"kubernetes.io/projected/97e3362c-da50-4989-abe0-9dde0694c635-kube-api-access-88fh5\") pod 
\"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.224846 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-trt9f"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.225128 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" podUID="5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" containerName="dnsmasq-dns" containerID="cri-o://e4a0adf6c315c14550be616cb96a9ec23a37d406c363546310b316d3fbfb0236" gracePeriod=10 Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.228811 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.268093 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-j4xm6"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.272705 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.277371 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.296187 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-j4xm6"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310174 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-config\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310228 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-combined-ca-bundle\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310268 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data-custom\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310310 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310440 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 
21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310472 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310499 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88fh5\" (UniqueName: \"kubernetes.io/projected/97e3362c-da50-4989-abe0-9dde0694c635-kube-api-access-88fh5\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310550 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddsbn\" (UniqueName: \"kubernetes.io/projected/e8978647-a7c1-4e25-b9c9-114227c06b39-kube-api-access-ddsbn\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310651 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310676 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310726 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97e3362c-da50-4989-abe0-9dde0694c635-logs\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.311479 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97e3362c-da50-4989-abe0-9dde0694c635-logs\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.315470 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-56b859c9db-tvldd"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.320496 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.322074 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-combined-ca-bundle\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.335084 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5fd66f57b7-5jqls"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.337133 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.340854 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.344092 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88fh5\" (UniqueName: \"kubernetes.io/projected/97e3362c-da50-4989-abe0-9dde0694c635-kube-api-access-88fh5\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.349562 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data-custom\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.354968 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.355007 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.372282 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-56b859c9db-tvldd"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.374448 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.390626 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fd66f57b7-5jqls"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.410188 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-577c4d4496-28rjx"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.412101 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76562ec1-fb40-4590-9d96-f05cafc13640-config-data-custom\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.412407 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-config\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.412520 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76562ec1-fb40-4590-9d96-f05cafc13640-logs\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.412628 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/334843b7-3c66-42fa-8880-4337946df593-config-data\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.412703 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.412773 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzxmt\" (UniqueName: \"kubernetes.io/projected/334843b7-3c66-42fa-8880-4337946df593-kube-api-access-zzxmt\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.412847 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22h84\" (UniqueName: \"kubernetes.io/projected/76562ec1-fb40-4590-9d96-f05cafc13640-kube-api-access-22h84\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.412939 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/76562ec1-fb40-4590-9d96-f05cafc13640-config-data\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.413052 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334843b7-3c66-42fa-8880-4337946df593-combined-ca-bundle\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.413148 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76562ec1-fb40-4590-9d96-f05cafc13640-combined-ca-bundle\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.413223 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.413347 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddsbn\" (UniqueName: \"kubernetes.io/projected/e8978647-a7c1-4e25-b9c9-114227c06b39-kube-api-access-ddsbn\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.413454 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/334843b7-3c66-42fa-8880-4337946df593-logs\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.413554 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/334843b7-3c66-42fa-8880-4337946df593-config-data-custom\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.413669 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.413738 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 
21:38:26.414723 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.415740 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.418044 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.418757 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.419275 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.420778 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-config\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.425903 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.440975 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-577c4d4496-28rjx"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.443429 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddsbn\" (UniqueName: \"kubernetes.io/projected/e8978647-a7c1-4e25-b9c9-114227c06b39-kube-api-access-ddsbn\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516217 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76562ec1-fb40-4590-9d96-f05cafc13640-config-data\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516268 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 
21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516400 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49d33f4c-f33a-445b-90ab-795e750ecf2a-logs\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516428 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334843b7-3c66-42fa-8880-4337946df593-combined-ca-bundle\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516488 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76562ec1-fb40-4590-9d96-f05cafc13640-combined-ca-bundle\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516584 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/334843b7-3c66-42fa-8880-4337946df593-logs\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516642 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/334843b7-3c66-42fa-8880-4337946df593-config-data-custom\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516666 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data-custom\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516791 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76562ec1-fb40-4590-9d96-f05cafc13640-config-data-custom\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516875 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtm4b\" (UniqueName: \"kubernetes.io/projected/49d33f4c-f33a-445b-90ab-795e750ecf2a-kube-api-access-mtm4b\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516894 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76562ec1-fb40-4590-9d96-f05cafc13640-logs\") pod \"barbican-worker-5fd66f57b7-5jqls\" 
(UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516937 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-combined-ca-bundle\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516969 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/334843b7-3c66-42fa-8880-4337946df593-config-data\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.517028 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzxmt\" (UniqueName: \"kubernetes.io/projected/334843b7-3c66-42fa-8880-4337946df593-kube-api-access-zzxmt\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.517058 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22h84\" (UniqueName: \"kubernetes.io/projected/76562ec1-fb40-4590-9d96-f05cafc13640-kube-api-access-22h84\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.523172 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76562ec1-fb40-4590-9d96-f05cafc13640-logs\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.530046 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76562ec1-fb40-4590-9d96-f05cafc13640-config-data-custom\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.530868 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334843b7-3c66-42fa-8880-4337946df593-combined-ca-bundle\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.532192 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76562ec1-fb40-4590-9d96-f05cafc13640-config-data\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.532978 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/334843b7-3c66-42fa-8880-4337946df593-logs\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.539153 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/334843b7-3c66-42fa-8880-4337946df593-config-data-custom\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.541974 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76562ec1-fb40-4590-9d96-f05cafc13640-combined-ca-bundle\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.548723 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/334843b7-3c66-42fa-8880-4337946df593-config-data\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.553031 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22h84\" (UniqueName: \"kubernetes.io/projected/76562ec1-fb40-4590-9d96-f05cafc13640-kube-api-access-22h84\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.574921 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzxmt\" (UniqueName: \"kubernetes.io/projected/334843b7-3c66-42fa-8880-4337946df593-kube-api-access-zzxmt\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.608264 4751 generic.go:334] "Generic (PLEG): container finished" podID="5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" containerID="e4a0adf6c315c14550be616cb96a9ec23a37d406c363546310b316d3fbfb0236" exitCode=0 Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.608418 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" event={"ID":"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de","Type":"ContainerDied","Data":"e4a0adf6c315c14550be616cb96a9ec23a37d406c363546310b316d3fbfb0236"} Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.610788 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5486cc9958-dvfn2" event={"ID":"5089359d-290c-4b07-80e4-0c4c73ffa8cd","Type":"ContainerStarted","Data":"5f448114a9e068f8f50004034c9e2ada11f4f45525f6ffe5d1ccd7a3167a8e23"} Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.610851 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5486cc9958-dvfn2" event={"ID":"5089359d-290c-4b07-80e4-0c4c73ffa8cd","Type":"ContainerStarted","Data":"dd3c757afd0458b9ccac3e6359d964949a2b5e06b72b283eb20687517536ba8e"} Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.619145 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data-custom\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.619288 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtm4b\" (UniqueName: \"kubernetes.io/projected/49d33f4c-f33a-445b-90ab-795e750ecf2a-kube-api-access-mtm4b\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.619347 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-combined-ca-bundle\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.619429 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.619461 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49d33f4c-f33a-445b-90ab-795e750ecf2a-logs\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.619899 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49d33f4c-f33a-445b-90ab-795e750ecf2a-logs\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.625574 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data-custom\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.626312 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-combined-ca-bundle\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.632391 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.649532 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mtm4b\" (UniqueName: \"kubernetes.io/projected/49d33f4c-f33a-445b-90ab-795e750ecf2a-kube-api-access-mtm4b\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.699666 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.733728 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.742740 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.762918 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.082174 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-55986d9fc9-zjsx4"] Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.468914 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-586n4" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server" probeResult="failure" output=< Jan 30 21:38:27 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:38:27 crc kubenswrapper[4751]: > Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.575959 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.607487 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-684d7cc675-gfk2w"] Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.653507 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-nb\") pod \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.653622 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-swift-storage-0\") pod \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.653667 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-sb\") pod \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.653765 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-config\") pod \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.653821 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-svc\") pod \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.653871 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g798q\" (UniqueName: \"kubernetes.io/projected/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-kube-api-access-g798q\") pod \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.747816 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-599c9789d8-7n2xt"] Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.749542 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-kube-api-access-g798q" (OuterVolumeSpecName: "kube-api-access-g798q") pod "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" (UID: "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de"). InnerVolumeSpecName "kube-api-access-g798q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.806500 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" (UID: "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.825853 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" (UID: "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.847741 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.847774 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g798q\" (UniqueName: \"kubernetes.io/projected/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-kube-api-access-g798q\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.847787 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.897912 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" event={"ID":"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de","Type":"ContainerDied","Data":"076d7d4d931c2808d030541aca7f47cf2bc19d9d8238d02afb1c64b4babe3a92"} Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.897960 4751 scope.go:117] "RemoveContainer" containerID="e4a0adf6c315c14550be616cb96a9ec23a37d406c363546310b316d3fbfb0236" Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.898093 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.954587 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-55986d9fc9-zjsx4" event={"ID":"aab674da-e1ff-4881-9432-fad6b85111f2","Type":"ContainerStarted","Data":"5951c7e9f19be42ae6b12e24068ae94da55d168e37b756f99490f10534e3236f"} Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.013360 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-config" (OuterVolumeSpecName: "config") pod "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" (UID: "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.038884 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5486cc9958-dvfn2" podStartSLOduration=4.038867837 podStartE2EDuration="4.038867837s" podCreationTimestamp="2026-01-30 21:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:28.030457132 +0000 UTC m=+1446.776279781" watchObservedRunningTime="2026-01-30 21:38:28.038867837 +0000 UTC m=+1446.784690486" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.053656 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.096478 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" (UID: "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:28 crc kubenswrapper[4751]: W0130 21:38:28.121582 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8978647_a7c1_4e25_b9c9_114227c06b39.slice/crio-2f99bc73fbfc7d9583563a00884a12bf270505fadf2b5daa26dca51bca6913ea WatchSource:0}: Error finding container 2f99bc73fbfc7d9583563a00884a12bf270505fadf2b5daa26dca51bca6913ea: Status 404 returned error can't find the container with id 2f99bc73fbfc7d9583563a00884a12bf270505fadf2b5daa26dca51bca6913ea Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.123354 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" (UID: "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:28 crc kubenswrapper[4751]: W0130 21:38:28.136624 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod334843b7_3c66_42fa_8880_4337946df593.slice/crio-06ec7e7cf9e09728dc67ef2332efb7da9f793376db2500797f5053f3ce954b8e WatchSource:0}: Error finding container 06ec7e7cf9e09728dc67ef2332efb7da9f793376db2500797f5053f3ce954b8e: Status 404 returned error can't find the container with id 06ec7e7cf9e09728dc67ef2332efb7da9f793376db2500797f5053f3ce954b8e Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.156564 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.156593 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.280566 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fd66f57b7-5jqls"] Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.280597 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5486cc9958-dvfn2" event={"ID":"5089359d-290c-4b07-80e4-0c4c73ffa8cd","Type":"ContainerStarted","Data":"744360fb184e7a689ab217ce4f6709b0ff7ab37b1bf6dc2a42f1d4e37e6d2d8f"} Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.280623 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.280634 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.280643 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-j4xm6"] Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.280654 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-56b859c9db-tvldd"] Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.280664 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-577c4d4496-28rjx"] Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.348387 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-trt9f"] Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.349304 4751 scope.go:117] "RemoveContainer" containerID="a53508f839991fbdcbf4f267b810010ea1fe74ec4a4881adda9c8e4964af9678" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.403889 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-trt9f"] Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.438459 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6bcbb59b46-2xhmj"] Jan 30 21:38:28 crc kubenswrapper[4751]: E0130 21:38:28.439071 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" containerName="dnsmasq-dns" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.439091 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" containerName="dnsmasq-dns" Jan 30 21:38:28 crc kubenswrapper[4751]: 
E0130 21:38:28.439106 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" containerName="init" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.439111 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" containerName="init" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.439334 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" containerName="dnsmasq-dns" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.446791 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.461512 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6bcbb59b46-2xhmj"] Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.566267 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x8gs\" (UniqueName: \"kubernetes.io/projected/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-kube-api-access-2x8gs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.566420 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-combined-ca-bundle\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.566509 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-public-tls-certs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.566537 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-logs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.566629 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-internal-tls-certs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.566773 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-config-data\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.566832 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-scripts\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.668408 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-combined-ca-bundle\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.668459 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-public-tls-certs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.668483 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-logs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.668525 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-internal-tls-certs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.668577 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-config-data\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.668602 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-scripts\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.668687 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x8gs\" (UniqueName: \"kubernetes.io/projected/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-kube-api-access-2x8gs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.671063 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-logs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.678967 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-combined-ca-bundle\") pod \"placement-6bcbb59b46-2xhmj\" (UID: 
\"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.683871 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-internal-tls-certs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.692641 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-public-tls-certs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.700621 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-scripts\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.718947 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-config-data\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.719015 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x8gs\" (UniqueName: \"kubernetes.io/projected/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-kube-api-access-2x8gs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.791038 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.044795 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bq6lp" event={"ID":"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24","Type":"ContainerStarted","Data":"bab22938ba50080dd9d55dae8178cb60ae0c855052ff172ac9ad37da3248c397"} Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.052257 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" event={"ID":"7a46895c-c496-4a55-b580-37e5118d467e","Type":"ContainerStarted","Data":"f69331b88c26a882279ae1095efcc5409673b2a358ee1f734b4039924d077292"} Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.060753 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-684d7cc675-gfk2w" event={"ID":"97e3362c-da50-4989-abe0-9dde0694c635","Type":"ContainerStarted","Data":"4c453d695d623dcfb13b6ad95951d2683a6fbb29c875d8bbb6a2715ff24c2c26"} Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.091584 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-55986d9fc9-zjsx4" event={"ID":"aab674da-e1ff-4881-9432-fad6b85111f2","Type":"ContainerStarted","Data":"26cc386e95d056a2e4ef9c20a02bf5ed78ff3b76c43d45c8275281e6f8bcc1c1"} Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.092414 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.115651 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-bq6lp" podStartSLOduration=5.06772528 podStartE2EDuration="52.115631177s" podCreationTimestamp="2026-01-30 21:37:37 +0000 UTC" firstStartedPulling="2026-01-30 21:37:39.553666235 +0000 UTC m=+1398.299488884" lastFinishedPulling="2026-01-30 21:38:26.601572132 +0000 UTC m=+1445.347394781" observedRunningTime="2026-01-30 21:38:29.069697716 +0000 UTC m=+1447.815520375" watchObservedRunningTime="2026-01-30 21:38:29.115631177 +0000 UTC m=+1447.861453826" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.116277 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-577c4d4496-28rjx" event={"ID":"49d33f4c-f33a-445b-90ab-795e750ecf2a","Type":"ContainerStarted","Data":"d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d"} Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.116388 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-577c4d4496-28rjx" event={"ID":"49d33f4c-f33a-445b-90ab-795e750ecf2a","Type":"ContainerStarted","Data":"90765e22c210e8d4dca2167b620180485ee5c8cdff299ab3f9aa70131e4301fe"} Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.120223 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-55986d9fc9-zjsx4" podStartSLOduration=4.120209509 podStartE2EDuration="4.120209509s" podCreationTimestamp="2026-01-30 21:38:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:29.117219759 +0000 UTC m=+1447.863042408" watchObservedRunningTime="2026-01-30 21:38:29.120209509 +0000 UTC m=+1447.866032158" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.120716 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd66f57b7-5jqls" 
event={"ID":"76562ec1-fb40-4590-9d96-f05cafc13640","Type":"ContainerStarted","Data":"3d06ccb2beb029cd8f729d6688245ef9156c53c373a572d41f8f38e1c44fcb1e"} Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.128920 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" event={"ID":"334843b7-3c66-42fa-8880-4337946df593","Type":"ContainerStarted","Data":"06ec7e7cf9e09728dc67ef2332efb7da9f793376db2500797f5053f3ce954b8e"} Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.145743 4751 generic.go:334] "Generic (PLEG): container finished" podID="e8978647-a7c1-4e25-b9c9-114227c06b39" containerID="e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1" exitCode=0 Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.147069 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" event={"ID":"e8978647-a7c1-4e25-b9c9-114227c06b39","Type":"ContainerDied","Data":"e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1"} Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.147115 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" event={"ID":"e8978647-a7c1-4e25-b9c9-114227c06b39","Type":"ContainerStarted","Data":"2f99bc73fbfc7d9583563a00884a12bf270505fadf2b5daa26dca51bca6913ea"} Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.350785 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6bcbb59b46-2xhmj"] Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.666426 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-b7f497ffb-fkntp"] Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.669596 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.673052 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.673113 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.697765 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b7f497ffb-fkntp"] Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.740476 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-config-data\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.740567 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chgk5\" (UniqueName: \"kubernetes.io/projected/1a2838e6-7563-4e97-893d-58d8619b780b-kube-api-access-chgk5\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.740637 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-internal-tls-certs\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.740666 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a2838e6-7563-4e97-893d-58d8619b780b-logs\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.740768 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-config-data-custom\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.740795 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-public-tls-certs\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.740845 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-combined-ca-bundle\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.850495 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-config-data-custom\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.850741 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-public-tls-certs\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.850862 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-combined-ca-bundle\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.850947 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-config-data\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.851085 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chgk5\" (UniqueName: \"kubernetes.io/projected/1a2838e6-7563-4e97-893d-58d8619b780b-kube-api-access-chgk5\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.851212 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-internal-tls-certs\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.851287 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a2838e6-7563-4e97-893d-58d8619b780b-logs\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.851983 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a2838e6-7563-4e97-893d-58d8619b780b-logs\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.856265 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-config-data\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.858104 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-config-data-custom\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.859123 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-combined-ca-bundle\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.861994 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-internal-tls-certs\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.862488 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-public-tls-certs\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.877136 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chgk5\" (UniqueName: \"kubernetes.io/projected/1a2838e6-7563-4e97-893d-58d8619b780b-kube-api-access-chgk5\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.988781 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:30 crc kubenswrapper[4751]: I0130 21:38:30.008669 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" path="/var/lib/kubelet/pods/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de/volumes" Jan 30 21:38:30 crc kubenswrapper[4751]: I0130 21:38:30.170452 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-577c4d4496-28rjx" event={"ID":"49d33f4c-f33a-445b-90ab-795e750ecf2a","Type":"ContainerStarted","Data":"59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718"} Jan 30 21:38:30 crc kubenswrapper[4751]: I0130 21:38:30.172654 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:30 crc kubenswrapper[4751]: I0130 21:38:30.172685 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:30 crc kubenswrapper[4751]: I0130 21:38:30.176394 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bcbb59b46-2xhmj" event={"ID":"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377","Type":"ContainerStarted","Data":"bd6c8a9d0c83efa447a847c980995760c5179489f43d2cf67a1740b8cb7d57fe"} Jan 30 21:38:30 crc kubenswrapper[4751]: I0130 21:38:30.176420 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bcbb59b46-2xhmj" event={"ID":"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377","Type":"ContainerStarted","Data":"9274e10e8e5704f97aeac961429028f5763307728946938222ba53a21048f59c"} Jan 30 21:38:30 crc kubenswrapper[4751]: I0130 21:38:30.188116 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-577c4d4496-28rjx" podStartSLOduration=4.18808896 podStartE2EDuration="4.18808896s" podCreationTimestamp="2026-01-30 21:38:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:30.185259894 +0000 UTC m=+1448.931082543" watchObservedRunningTime="2026-01-30 21:38:30.18808896 +0000 UTC m=+1448.933911609" Jan 30 21:38:30 crc kubenswrapper[4751]: I0130 21:38:30.189908 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" event={"ID":"e8978647-a7c1-4e25-b9c9-114227c06b39","Type":"ContainerStarted","Data":"292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446"} Jan 30 21:38:30 crc kubenswrapper[4751]: I0130 21:38:30.189973 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:30 crc kubenswrapper[4751]: I0130 21:38:30.214758 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" podStartSLOduration=5.214741504 podStartE2EDuration="5.214741504s" podCreationTimestamp="2026-01-30 21:38:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:30.208384534 +0000 UTC m=+1448.954207183" watchObservedRunningTime="2026-01-30 21:38:30.214741504 +0000 UTC m=+1448.960564153" Jan 30 21:38:31 crc kubenswrapper[4751]: I0130 21:38:31.809697 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b7f497ffb-fkntp"] Jan 30 21:38:31 crc kubenswrapper[4751]: W0130 21:38:31.814716 4751 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a2838e6_7563_4e97_893d_58d8619b780b.slice/crio-3a95f87bf212d1d853c04491e7a74f92e47d4df580b71362f2463bb0478a9f56 WatchSource:0}: Error finding container 3a95f87bf212d1d853c04491e7a74f92e47d4df580b71362f2463bb0478a9f56: Status 404 returned error can't find the container with id 3a95f87bf212d1d853c04491e7a74f92e47d4df580b71362f2463bb0478a9f56 Jan 30 21:38:32 crc kubenswrapper[4751]: I0130 21:38:32.230783 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bcbb59b46-2xhmj" event={"ID":"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377","Type":"ContainerStarted","Data":"1f26f6a6e1931ca0092bc20d95f468f5ae4990cb949e330792df3d8104286def"} Jan 30 21:38:32 crc kubenswrapper[4751]: I0130 21:38:32.231057 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:32 crc kubenswrapper[4751]: I0130 21:38:32.231313 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:32 crc kubenswrapper[4751]: I0130 21:38:32.232176 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b7f497ffb-fkntp" event={"ID":"1a2838e6-7563-4e97-893d-58d8619b780b","Type":"ContainerStarted","Data":"3a95f87bf212d1d853c04491e7a74f92e47d4df580b71362f2463bb0478a9f56"} Jan 30 21:38:32 crc kubenswrapper[4751]: I0130 21:38:32.235049 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd66f57b7-5jqls" event={"ID":"76562ec1-fb40-4590-9d96-f05cafc13640","Type":"ContainerStarted","Data":"90029ee8bdf14aa08508a916b80fe7db8b07879c4b6c630021e9d41a67d33f3d"} Jan 30 21:38:32 crc kubenswrapper[4751]: I0130 21:38:32.243596 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" event={"ID":"334843b7-3c66-42fa-8880-4337946df593","Type":"ContainerStarted","Data":"7e3e733bab2217b3c084197ff1c8d01b30aa851c87ac55908e0a6c87ddc58079"} Jan 30 21:38:32 crc kubenswrapper[4751]: I0130 21:38:32.254850 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" event={"ID":"7a46895c-c496-4a55-b580-37e5118d467e","Type":"ContainerStarted","Data":"0bb2cc6dca6986bdb5a77526a0076115cd791dc2be166bdef0afca55706c64d0"} Jan 30 21:38:32 crc kubenswrapper[4751]: I0130 21:38:32.259858 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-684d7cc675-gfk2w" event={"ID":"97e3362c-da50-4989-abe0-9dde0694c635","Type":"ContainerStarted","Data":"cdea15480a78958320e248ecfbbebe1a3f0521e65da5ed371170e13162407535"} Jan 30 21:38:32 crc kubenswrapper[4751]: I0130 21:38:32.262765 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6bcbb59b46-2xhmj" podStartSLOduration=4.262746025 podStartE2EDuration="4.262746025s" podCreationTimestamp="2026-01-30 21:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:32.25319317 +0000 UTC m=+1450.999015819" watchObservedRunningTime="2026-01-30 21:38:32.262746025 +0000 UTC m=+1451.008568674" Jan 30 21:38:33 crc kubenswrapper[4751]: I0130 21:38:33.272356 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b7f497ffb-fkntp" 
event={"ID":"1a2838e6-7563-4e97-893d-58d8619b780b","Type":"ContainerStarted","Data":"19b85bac9b784cc771b4cef45d6b17580f604bf4aa31f641960ca4c684c0b355"} Jan 30 21:38:33 crc kubenswrapper[4751]: I0130 21:38:33.274881 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd66f57b7-5jqls" event={"ID":"76562ec1-fb40-4590-9d96-f05cafc13640","Type":"ContainerStarted","Data":"ba8cfedf0c994501fb1ef5f7240d8c1603e547c5b339c8de5470c13c61b42fbc"} Jan 30 21:38:33 crc kubenswrapper[4751]: I0130 21:38:33.277318 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" event={"ID":"334843b7-3c66-42fa-8880-4337946df593","Type":"ContainerStarted","Data":"0b2955b3e35fa944ef58f65304e69ed3fba383de470d48a45c715701012b5a7d"} Jan 30 21:38:33 crc kubenswrapper[4751]: I0130 21:38:33.281216 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" event={"ID":"7a46895c-c496-4a55-b580-37e5118d467e","Type":"ContainerStarted","Data":"959811e3ff13503eafe6cff50a8c03fc84e50fb6ff23ae19bd94fccb9c4b2d25"} Jan 30 21:38:33 crc kubenswrapper[4751]: I0130 21:38:33.284558 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-684d7cc675-gfk2w" event={"ID":"97e3362c-da50-4989-abe0-9dde0694c635","Type":"ContainerStarted","Data":"52c38d4fff52ee815c69164766a6fb7e40fd7fe7b4e65c636741695adcfc586f"} Jan 30 21:38:33 crc kubenswrapper[4751]: I0130 21:38:33.298763 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5fd66f57b7-5jqls" podStartSLOduration=4.21649005 podStartE2EDuration="7.298744932s" podCreationTimestamp="2026-01-30 21:38:26 +0000 UTC" firstStartedPulling="2026-01-30 21:38:28.123832523 +0000 UTC m=+1446.869655172" lastFinishedPulling="2026-01-30 21:38:31.206087405 +0000 UTC m=+1449.951910054" observedRunningTime="2026-01-30 21:38:33.292361371 +0000 UTC m=+1452.038184020" watchObservedRunningTime="2026-01-30 21:38:33.298744932 +0000 UTC m=+1452.044567581" Jan 30 21:38:33 crc kubenswrapper[4751]: I0130 21:38:33.311617 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" podStartSLOduration=4.249224338 podStartE2EDuration="7.311588747s" podCreationTimestamp="2026-01-30 21:38:26 +0000 UTC" firstStartedPulling="2026-01-30 21:38:28.144704082 +0000 UTC m=+1446.890526731" lastFinishedPulling="2026-01-30 21:38:31.207068481 +0000 UTC m=+1449.952891140" observedRunningTime="2026-01-30 21:38:33.306697835 +0000 UTC m=+1452.052520484" watchObservedRunningTime="2026-01-30 21:38:33.311588747 +0000 UTC m=+1452.057411436" Jan 30 21:38:33 crc kubenswrapper[4751]: I0130 21:38:33.346905 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-684d7cc675-gfk2w"] Jan 30 21:38:33 crc kubenswrapper[4751]: I0130 21:38:33.350015 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" podStartSLOduration=4.946301883 podStartE2EDuration="8.349992844s" podCreationTimestamp="2026-01-30 21:38:25 +0000 UTC" firstStartedPulling="2026-01-30 21:38:27.786990741 +0000 UTC m=+1446.532813390" lastFinishedPulling="2026-01-30 21:38:31.190681702 +0000 UTC m=+1449.936504351" observedRunningTime="2026-01-30 21:38:33.326843415 +0000 UTC m=+1452.072666064" watchObservedRunningTime="2026-01-30 21:38:33.349992844 +0000 UTC m=+1452.095815483" Jan 30 21:38:33 crc 
kubenswrapper[4751]: I0130 21:38:33.365678 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-684d7cc675-gfk2w" podStartSLOduration=4.912632712 podStartE2EDuration="8.365661855s" podCreationTimestamp="2026-01-30 21:38:25 +0000 UTC" firstStartedPulling="2026-01-30 21:38:27.755887978 +0000 UTC m=+1446.501710627" lastFinishedPulling="2026-01-30 21:38:31.208917121 +0000 UTC m=+1449.954739770" observedRunningTime="2026-01-30 21:38:33.36063963 +0000 UTC m=+1452.106462299" watchObservedRunningTime="2026-01-30 21:38:33.365661855 +0000 UTC m=+1452.111484504" Jan 30 21:38:33 crc kubenswrapper[4751]: I0130 21:38:33.387139 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-599c9789d8-7n2xt"] Jan 30 21:38:34 crc kubenswrapper[4751]: I0130 21:38:34.300493 4751 generic.go:334] "Generic (PLEG): container finished" podID="1051dd3c-5d30-47f1-8162-3a3e9d5ee271" containerID="5e15084dd70b42693552f9d64b22474ea93dd026e14a53bb39cd74bd8ba86b97" exitCode=0 Jan 30 21:38:34 crc kubenswrapper[4751]: I0130 21:38:34.300582 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-npwgd" event={"ID":"1051dd3c-5d30-47f1-8162-3a3e9d5ee271","Type":"ContainerDied","Data":"5e15084dd70b42693552f9d64b22474ea93dd026e14a53bb39cd74bd8ba86b97"} Jan 30 21:38:35 crc kubenswrapper[4751]: I0130 21:38:35.311828 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-684d7cc675-gfk2w" podUID="97e3362c-da50-4989-abe0-9dde0694c635" containerName="barbican-worker" containerID="cri-o://52c38d4fff52ee815c69164766a6fb7e40fd7fe7b4e65c636741695adcfc586f" gracePeriod=30 Jan 30 21:38:35 crc kubenswrapper[4751]: I0130 21:38:35.312058 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-684d7cc675-gfk2w" podUID="97e3362c-da50-4989-abe0-9dde0694c635" containerName="barbican-worker-log" containerID="cri-o://cdea15480a78958320e248ecfbbebe1a3f0521e65da5ed371170e13162407535" gracePeriod=30 Jan 30 21:38:35 crc kubenswrapper[4751]: I0130 21:38:35.312241 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" podUID="7a46895c-c496-4a55-b580-37e5118d467e" containerName="barbican-keystone-listener-log" containerID="cri-o://0bb2cc6dca6986bdb5a77526a0076115cd791dc2be166bdef0afca55706c64d0" gracePeriod=30 Jan 30 21:38:35 crc kubenswrapper[4751]: I0130 21:38:35.312288 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" podUID="7a46895c-c496-4a55-b580-37e5118d467e" containerName="barbican-keystone-listener" containerID="cri-o://959811e3ff13503eafe6cff50a8c03fc84e50fb6ff23ae19bd94fccb9c4b2d25" gracePeriod=30 Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.354621 4751 generic.go:334] "Generic (PLEG): container finished" podID="7a46895c-c496-4a55-b580-37e5118d467e" containerID="959811e3ff13503eafe6cff50a8c03fc84e50fb6ff23ae19bd94fccb9c4b2d25" exitCode=0 Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.354920 4751 generic.go:334] "Generic (PLEG): container finished" podID="7a46895c-c496-4a55-b580-37e5118d467e" containerID="0bb2cc6dca6986bdb5a77526a0076115cd791dc2be166bdef0afca55706c64d0" exitCode=143 Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.354986 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" event={"ID":"7a46895c-c496-4a55-b580-37e5118d467e","Type":"ContainerDied","Data":"959811e3ff13503eafe6cff50a8c03fc84e50fb6ff23ae19bd94fccb9c4b2d25"} Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.355012 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" event={"ID":"7a46895c-c496-4a55-b580-37e5118d467e","Type":"ContainerDied","Data":"0bb2cc6dca6986bdb5a77526a0076115cd791dc2be166bdef0afca55706c64d0"} Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.362042 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-npwgd" event={"ID":"1051dd3c-5d30-47f1-8162-3a3e9d5ee271","Type":"ContainerDied","Data":"61ca5d5ef115983d440cf3f223c2366b80c380ae13e04f479bd30ca5a18ae1d4"} Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.362308 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61ca5d5ef115983d440cf3f223c2366b80c380ae13e04f479bd30ca5a18ae1d4" Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.374394 4751 generic.go:334] "Generic (PLEG): container finished" podID="97e3362c-da50-4989-abe0-9dde0694c635" containerID="52c38d4fff52ee815c69164766a6fb7e40fd7fe7b4e65c636741695adcfc586f" exitCode=0 Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.374427 4751 generic.go:334] "Generic (PLEG): container finished" podID="97e3362c-da50-4989-abe0-9dde0694c635" containerID="cdea15480a78958320e248ecfbbebe1a3f0521e65da5ed371170e13162407535" exitCode=143 Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.374451 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-684d7cc675-gfk2w" event={"ID":"97e3362c-da50-4989-abe0-9dde0694c635","Type":"ContainerDied","Data":"52c38d4fff52ee815c69164766a6fb7e40fd7fe7b4e65c636741695adcfc586f"} Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.374479 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-684d7cc675-gfk2w" event={"ID":"97e3362c-da50-4989-abe0-9dde0694c635","Type":"ContainerDied","Data":"cdea15480a78958320e248ecfbbebe1a3f0521e65da5ed371170e13162407535"} Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.420389 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-npwgd" Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.518841 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-config-data\") pod \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.518985 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-combined-ca-bundle\") pod \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.519036 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxctb\" (UniqueName: \"kubernetes.io/projected/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-kube-api-access-vxctb\") pod \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.531591 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-kube-api-access-vxctb" (OuterVolumeSpecName: "kube-api-access-vxctb") pod "1051dd3c-5d30-47f1-8162-3a3e9d5ee271" (UID: "1051dd3c-5d30-47f1-8162-3a3e9d5ee271"). InnerVolumeSpecName "kube-api-access-vxctb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.575581 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1051dd3c-5d30-47f1-8162-3a3e9d5ee271" (UID: "1051dd3c-5d30-47f1-8162-3a3e9d5ee271"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.620866 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.620904 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxctb\" (UniqueName: \"kubernetes.io/projected/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-kube-api-access-vxctb\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.661236 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-config-data" (OuterVolumeSpecName: "config-data") pod "1051dd3c-5d30-47f1-8162-3a3e9d5ee271" (UID: "1051dd3c-5d30-47f1-8162-3a3e9d5ee271"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.702503 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.724623 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.767159 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5x59j"] Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.767413 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" podUID="714bda18-396a-4c61-b32c-28c97f9212c7" containerName="dnsmasq-dns" containerID="cri-o://d75cd44bc174bd1c8fb960d6b48079304f29a447af1f96a3c4feb1e101ec22b3" gracePeriod=10 Jan 30 21:38:37 crc kubenswrapper[4751]: I0130 21:38:37.388236 4751 generic.go:334] "Generic (PLEG): container finished" podID="714bda18-396a-4c61-b32c-28c97f9212c7" containerID="d75cd44bc174bd1c8fb960d6b48079304f29a447af1f96a3c4feb1e101ec22b3" exitCode=0 Jan 30 21:38:37 crc kubenswrapper[4751]: I0130 21:38:37.388291 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" event={"ID":"714bda18-396a-4c61-b32c-28c97f9212c7","Type":"ContainerDied","Data":"d75cd44bc174bd1c8fb960d6b48079304f29a447af1f96a3c4feb1e101ec22b3"} Jan 30 21:38:37 crc kubenswrapper[4751]: I0130 21:38:37.388636 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-npwgd" Jan 30 21:38:38 crc kubenswrapper[4751]: I0130 21:38:38.688988 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-586n4" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server" probeResult="failure" output=< Jan 30 21:38:38 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:38:38 crc kubenswrapper[4751]: > Jan 30 21:38:39 crc kubenswrapper[4751]: I0130 21:38:39.029930 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" podUID="714bda18-396a-4c61-b32c-28c97f9212c7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.192:5353: connect: connection refused" Jan 30 21:38:39 crc kubenswrapper[4751]: I0130 21:38:39.430501 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bq6lp" event={"ID":"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24","Type":"ContainerDied","Data":"bab22938ba50080dd9d55dae8178cb60ae0c855052ff172ac9ad37da3248c397"} Jan 30 21:38:39 crc kubenswrapper[4751]: I0130 21:38:39.430442 4751 generic.go:334] "Generic (PLEG): container finished" podID="564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" containerID="bab22938ba50080dd9d55dae8178cb60ae0c855052ff172ac9ad37da3248c397" exitCode=0 Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.174238 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.280436 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data-custom\") pod \"7a46895c-c496-4a55-b580-37e5118d467e\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.280521 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a46895c-c496-4a55-b580-37e5118d467e-logs\") pod \"7a46895c-c496-4a55-b580-37e5118d467e\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.280713 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg9wq\" (UniqueName: \"kubernetes.io/projected/7a46895c-c496-4a55-b580-37e5118d467e-kube-api-access-sg9wq\") pod \"7a46895c-c496-4a55-b580-37e5118d467e\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.280949 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a46895c-c496-4a55-b580-37e5118d467e-logs" (OuterVolumeSpecName: "logs") pod "7a46895c-c496-4a55-b580-37e5118d467e" (UID: "7a46895c-c496-4a55-b580-37e5118d467e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.281017 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-combined-ca-bundle\") pod \"7a46895c-c496-4a55-b580-37e5118d467e\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.281130 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data\") pod \"7a46895c-c496-4a55-b580-37e5118d467e\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.281742 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a46895c-c496-4a55-b580-37e5118d467e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.296615 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a46895c-c496-4a55-b580-37e5118d467e-kube-api-access-sg9wq" (OuterVolumeSpecName: "kube-api-access-sg9wq") pod "7a46895c-c496-4a55-b580-37e5118d467e" (UID: "7a46895c-c496-4a55-b580-37e5118d467e"). InnerVolumeSpecName "kube-api-access-sg9wq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.296763 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7a46895c-c496-4a55-b580-37e5118d467e" (UID: "7a46895c-c496-4a55-b580-37e5118d467e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.331066 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a46895c-c496-4a55-b580-37e5118d467e" (UID: "7a46895c-c496-4a55-b580-37e5118d467e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.383010 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.383045 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.383054 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg9wq\" (UniqueName: \"kubernetes.io/projected/7a46895c-c496-4a55-b580-37e5118d467e-kube-api-access-sg9wq\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.413013 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data" (OuterVolumeSpecName: "config-data") pod "7a46895c-c496-4a55-b580-37e5118d467e" (UID: "7a46895c-c496-4a55-b580-37e5118d467e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.467657 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.468633 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" event={"ID":"7a46895c-c496-4a55-b580-37e5118d467e","Type":"ContainerDied","Data":"f69331b88c26a882279ae1095efcc5409673b2a358ee1f734b4039924d077292"} Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.468748 4751 scope.go:117] "RemoveContainer" containerID="959811e3ff13503eafe6cff50a8c03fc84e50fb6ff23ae19bd94fccb9c4b2d25" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.485178 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.564911 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-599c9789d8-7n2xt"] Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.565101 4751 scope.go:117] "RemoveContainer" containerID="0bb2cc6dca6986bdb5a77526a0076115cd791dc2be166bdef0afca55706c64d0" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.596356 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.607748 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-599c9789d8-7n2xt"] Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.669863 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.683513 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.706647 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.792809 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-sb\") pod \"714bda18-396a-4c61-b32c-28c97f9212c7\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.792916 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc98s\" (UniqueName: \"kubernetes.io/projected/714bda18-396a-4c61-b32c-28c97f9212c7-kube-api-access-pc98s\") pod \"714bda18-396a-4c61-b32c-28c97f9212c7\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.793155 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-swift-storage-0\") pod \"714bda18-396a-4c61-b32c-28c97f9212c7\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.793180 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-config\") pod \"714bda18-396a-4c61-b32c-28c97f9212c7\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.793245 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-nb\") pod \"714bda18-396a-4c61-b32c-28c97f9212c7\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.793279 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-svc\") pod \"714bda18-396a-4c61-b32c-28c97f9212c7\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.800357 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/714bda18-396a-4c61-b32c-28c97f9212c7-kube-api-access-pc98s" (OuterVolumeSpecName: "kube-api-access-pc98s") pod "714bda18-396a-4c61-b32c-28c97f9212c7" (UID: "714bda18-396a-4c61-b32c-28c97f9212c7"). InnerVolumeSpecName "kube-api-access-pc98s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: E0130 21:38:40.836883 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.876184 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "714bda18-396a-4c61-b32c-28c97f9212c7" (UID: "714bda18-396a-4c61-b32c-28c97f9212c7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.879871 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "714bda18-396a-4c61-b32c-28c97f9212c7" (UID: "714bda18-396a-4c61-b32c-28c97f9212c7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.885543 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "714bda18-396a-4c61-b32c-28c97f9212c7" (UID: "714bda18-396a-4c61-b32c-28c97f9212c7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.893268 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "714bda18-396a-4c61-b32c-28c97f9212c7" (UID: "714bda18-396a-4c61-b32c-28c97f9212c7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.896832 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data-custom\") pod \"97e3362c-da50-4989-abe0-9dde0694c635\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.896960 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97e3362c-da50-4989-abe0-9dde0694c635-logs\") pod \"97e3362c-da50-4989-abe0-9dde0694c635\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.897004 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data\") pod \"97e3362c-da50-4989-abe0-9dde0694c635\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.897094 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-combined-ca-bundle\") pod \"97e3362c-da50-4989-abe0-9dde0694c635\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.897212 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88fh5\" (UniqueName: \"kubernetes.io/projected/97e3362c-da50-4989-abe0-9dde0694c635-kube-api-access-88fh5\") pod \"97e3362c-da50-4989-abe0-9dde0694c635\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.897499 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97e3362c-da50-4989-abe0-9dde0694c635-logs" (OuterVolumeSpecName: "logs") pod "97e3362c-da50-4989-abe0-9dde0694c635" (UID: "97e3362c-da50-4989-abe0-9dde0694c635"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.898116 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.898131 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.898143 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.898151 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc98s\" (UniqueName: \"kubernetes.io/projected/714bda18-396a-4c61-b32c-28c97f9212c7-kube-api-access-pc98s\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.898160 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97e3362c-da50-4989-abe0-9dde0694c635-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.898168 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.909553 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "97e3362c-da50-4989-abe0-9dde0694c635" (UID: "97e3362c-da50-4989-abe0-9dde0694c635"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.911628 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97e3362c-da50-4989-abe0-9dde0694c635-kube-api-access-88fh5" (OuterVolumeSpecName: "kube-api-access-88fh5") pod "97e3362c-da50-4989-abe0-9dde0694c635" (UID: "97e3362c-da50-4989-abe0-9dde0694c635"). InnerVolumeSpecName "kube-api-access-88fh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.913364 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-config" (OuterVolumeSpecName: "config") pod "714bda18-396a-4c61-b32c-28c97f9212c7" (UID: "714bda18-396a-4c61-b32c-28c97f9212c7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.927065 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97e3362c-da50-4989-abe0-9dde0694c635" (UID: "97e3362c-da50-4989-abe0-9dde0694c635"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.933344 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.997140 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data" (OuterVolumeSpecName: "config-data") pod "97e3362c-da50-4989-abe0-9dde0694c635" (UID: "97e3362c-da50-4989-abe0-9dde0694c635"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.002533 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.002662 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.002678 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.002698 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.002713 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88fh5\" (UniqueName: \"kubernetes.io/projected/97e3362c-da50-4989-abe0-9dde0694c635-kube-api-access-88fh5\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.105405 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-etc-machine-id\") pod \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.105715 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xqxr\" (UniqueName: \"kubernetes.io/projected/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-kube-api-access-4xqxr\") pod \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.105748 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-config-data\") pod \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.105549 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" (UID: "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.105818 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-combined-ca-bundle\") pod \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.105910 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-db-sync-config-data\") pod \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.105981 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-scripts\") pod \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.106719 4751 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.113615 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-kube-api-access-4xqxr" (OuterVolumeSpecName: "kube-api-access-4xqxr") pod "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" (UID: "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24"). InnerVolumeSpecName "kube-api-access-4xqxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.114595 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" (UID: "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.117478 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-scripts" (OuterVolumeSpecName: "scripts") pod "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" (UID: "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.143294 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" (UID: "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.176743 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-config-data" (OuterVolumeSpecName: "config-data") pod "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" (UID: "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.210490 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.210737 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xqxr\" (UniqueName: \"kubernetes.io/projected/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-kube-api-access-4xqxr\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.210796 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.210860 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.210917 4751 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.485602 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.485586 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bq6lp" event={"ID":"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24","Type":"ContainerDied","Data":"d48985825eedc61af18140d62898c9c9236f51e33569add314f3a9440bbd00d5"} Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.486293 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d48985825eedc61af18140d62898c9c9236f51e33569add314f3a9440bbd00d5" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.492625 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.492638 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-684d7cc675-gfk2w" event={"ID":"97e3362c-da50-4989-abe0-9dde0694c635","Type":"ContainerDied","Data":"4c453d695d623dcfb13b6ad95951d2683a6fbb29c875d8bbb6a2715ff24c2c26"} Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.492705 4751 scope.go:117] "RemoveContainer" containerID="52c38d4fff52ee815c69164766a6fb7e40fd7fe7b4e65c636741695adcfc586f" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.504778 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36866d1c-b1a0-4d3e-a87f-f5901b053bb5","Type":"ContainerStarted","Data":"cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e"} Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.504972 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="ceilometer-notification-agent" containerID="cri-o://e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5" gracePeriod=30 Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.505048 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.505760 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="proxy-httpd" containerID="cri-o://cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e" gracePeriod=30 Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.505820 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="sg-core" containerID="cri-o://3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0" gracePeriod=30 Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.527153 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" event={"ID":"714bda18-396a-4c61-b32c-28c97f9212c7","Type":"ContainerDied","Data":"d95ab28a617fe672cc0a279ffe8dbf4ffcdc0187184b974eff4f150e1720495f"} Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.528171 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.531232 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b7f497ffb-fkntp" event={"ID":"1a2838e6-7563-4e97-893d-58d8619b780b","Type":"ContainerStarted","Data":"f80d2951cd9e934acc6b9716e4aedcf612bbf8e8106ef290bbc72dadd9b128a4"} Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.531304 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.531578 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.583577 4751 scope.go:117] "RemoveContainer" containerID="cdea15480a78958320e248ecfbbebe1a3f0521e65da5ed371170e13162407535" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.618289 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-b7f497ffb-fkntp" podStartSLOduration=12.618262793 podStartE2EDuration="12.618262793s" podCreationTimestamp="2026-01-30 21:38:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:41.566104626 +0000 UTC m=+1460.311927275" watchObservedRunningTime="2026-01-30 21:38:41.618262793 +0000 UTC m=+1460.364085442" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.620155 4751 scope.go:117] "RemoveContainer" containerID="d75cd44bc174bd1c8fb960d6b48079304f29a447af1f96a3c4feb1e101ec22b3" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.653776 4751 scope.go:117] "RemoveContainer" containerID="ef63bfb279b0f69282ea04ec8633731532edcb149df478089ae1d0918490a1d0" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.657742 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-684d7cc675-gfk2w"] Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.669594 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-684d7cc675-gfk2w"] Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.690898 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5x59j"] Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.753926 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5x59j"] Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801072 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 21:38:41 crc kubenswrapper[4751]: E0130 21:38:41.801556 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97e3362c-da50-4989-abe0-9dde0694c635" containerName="barbican-worker-log" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801569 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="97e3362c-da50-4989-abe0-9dde0694c635" containerName="barbican-worker-log" Jan 30 21:38:41 crc kubenswrapper[4751]: E0130 21:38:41.801583 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="714bda18-396a-4c61-b32c-28c97f9212c7" containerName="init" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801589 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="714bda18-396a-4c61-b32c-28c97f9212c7" containerName="init" Jan 30 21:38:41 crc kubenswrapper[4751]: E0130 21:38:41.801599 4751 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="1051dd3c-5d30-47f1-8162-3a3e9d5ee271" containerName="heat-db-sync" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801606 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1051dd3c-5d30-47f1-8162-3a3e9d5ee271" containerName="heat-db-sync" Jan 30 21:38:41 crc kubenswrapper[4751]: E0130 21:38:41.801617 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97e3362c-da50-4989-abe0-9dde0694c635" containerName="barbican-worker" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801623 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="97e3362c-da50-4989-abe0-9dde0694c635" containerName="barbican-worker" Jan 30 21:38:41 crc kubenswrapper[4751]: E0130 21:38:41.801631 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a46895c-c496-4a55-b580-37e5118d467e" containerName="barbican-keystone-listener" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801637 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a46895c-c496-4a55-b580-37e5118d467e" containerName="barbican-keystone-listener" Jan 30 21:38:41 crc kubenswrapper[4751]: E0130 21:38:41.801657 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="714bda18-396a-4c61-b32c-28c97f9212c7" containerName="dnsmasq-dns" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801663 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="714bda18-396a-4c61-b32c-28c97f9212c7" containerName="dnsmasq-dns" Jan 30 21:38:41 crc kubenswrapper[4751]: E0130 21:38:41.801678 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" containerName="cinder-db-sync" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801684 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" containerName="cinder-db-sync" Jan 30 21:38:41 crc kubenswrapper[4751]: E0130 21:38:41.801704 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a46895c-c496-4a55-b580-37e5118d467e" containerName="barbican-keystone-listener-log" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801710 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a46895c-c496-4a55-b580-37e5118d467e" containerName="barbican-keystone-listener-log" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801923 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a46895c-c496-4a55-b580-37e5118d467e" containerName="barbican-keystone-listener" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801940 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="97e3362c-da50-4989-abe0-9dde0694c635" containerName="barbican-worker-log" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801956 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="1051dd3c-5d30-47f1-8162-3a3e9d5ee271" containerName="heat-db-sync" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801963 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a46895c-c496-4a55-b580-37e5118d467e" containerName="barbican-keystone-listener-log" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801975 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="97e3362c-da50-4989-abe0-9dde0694c635" containerName="barbican-worker" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801986 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="714bda18-396a-4c61-b32c-28c97f9212c7" containerName="dnsmasq-dns" 
Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801999 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" containerName="cinder-db-sync"
Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.803178 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.807741 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-s756f"
Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.808582 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.808710 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.810818 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.826277 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.868575 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-hb44m"]
Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.870490 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-hb44m"
Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.914932 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-hb44m"]
Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.960721 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-scripts\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0"
Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.961001 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0"
Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.961034 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0"
Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.961116 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0"
Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.961159 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/81228544-ce67-44f1-b4e0-6a218e154363-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0"
Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.961256 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbtqt\" (UniqueName: \"kubernetes.io/projected/81228544-ce67-44f1-b4e0-6a218e154363-kube-api-access-bbtqt\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.007703 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="714bda18-396a-4c61-b32c-28c97f9212c7" path="/var/lib/kubelet/pods/714bda18-396a-4c61-b32c-28c97f9212c7/volumes"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.008530 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a46895c-c496-4a55-b580-37e5118d467e" path="/var/lib/kubelet/pods/7a46895c-c496-4a55-b580-37e5118d467e/volumes"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.009129 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97e3362c-da50-4989-abe0-9dde0694c635" path="/var/lib/kubelet/pods/97e3362c-da50-4989-abe0-9dde0694c635/volumes"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.063690 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-svc\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.063752 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/81228544-ce67-44f1-b4e0-6a218e154363-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.063806 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-config\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.063833 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4z8f\" (UniqueName: \"kubernetes.io/projected/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-kube-api-access-z4z8f\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.063871 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbtqt\" (UniqueName: \"kubernetes.io/projected/81228544-ce67-44f1-b4e0-6a218e154363-kube-api-access-bbtqt\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.063966 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.063991 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-scripts\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.064009 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.064036 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.064059 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.064106 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.064129 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.064995 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/81228544-ce67-44f1-b4e0-6a218e154363-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.074756 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.076102 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.077123 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-scripts\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.086780 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.090111 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.093615 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbtqt\" (UniqueName: \"kubernetes.io/projected/81228544-ce67-44f1-b4e0-6a218e154363-kube-api-access-bbtqt\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.094912 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.099370 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.118810 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.140637 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.167625 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.167686 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-svc\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.167726 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-config\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.167756 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4z8f\" (UniqueName: \"kubernetes.io/projected/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-kube-api-access-z4z8f\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.167884 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.167909 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.169036 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.172926 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.184420 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-config\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.185551 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-svc\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.185706 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.191293 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4z8f\" (UniqueName: \"kubernetes.io/projected/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-kube-api-access-z4z8f\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.229140 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-hb44m"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.269517 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e764852-fe70-4844-a2f2-53e15c45d4c1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.269569 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.269588 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e764852-fe70-4844-a2f2-53e15c45d4c1-logs\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.269627 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data-custom\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.269673 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-scripts\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.269747 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.269809 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc746\" (UniqueName: \"kubernetes.io/projected/2e764852-fe70-4844-a2f2-53e15c45d4c1-kube-api-access-vc746\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.374153 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.376853 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc746\" (UniqueName: \"kubernetes.io/projected/2e764852-fe70-4844-a2f2-53e15c45d4c1-kube-api-access-vc746\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.376980 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e764852-fe70-4844-a2f2-53e15c45d4c1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.377045 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.377067 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e764852-fe70-4844-a2f2-53e15c45d4c1-logs\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.377217 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data-custom\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.377374 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-scripts\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.378512 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e764852-fe70-4844-a2f2-53e15c45d4c1-logs\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.379371 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e764852-fe70-4844-a2f2-53e15c45d4c1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.381829 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.382188 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-scripts\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.388908 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.389874 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data-custom\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.400509 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc746\" (UniqueName: \"kubernetes.io/projected/2e764852-fe70-4844-a2f2-53e15c45d4c1-kube-api-access-vc746\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.556903 4751 generic.go:334] "Generic (PLEG): container finished" podID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerID="cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e" exitCode=0
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.556932 4751 generic.go:334] "Generic (PLEG): container finished" podID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerID="3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0" exitCode=2
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.556982 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36866d1c-b1a0-4d3e-a87f-f5901b053bb5","Type":"ContainerDied","Data":"cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e"}
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.557008 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36866d1c-b1a0-4d3e-a87f-f5901b053bb5","Type":"ContainerDied","Data":"3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0"}
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.588757 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.777345 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.789171 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-566dccff6-ddvxf"
Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.917773 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-hb44m"]
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.076801 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5db486d6f7-9jq9s"]
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.077772 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5db486d6f7-9jq9s" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerName="neutron-api" containerID="cri-o://ffe40f5beac55335ed5a0e5ca3f2b87505ff8c1d5062b9daa9f817649c1fad14" gracePeriod=30
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.079742 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5db486d6f7-9jq9s" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerName="neutron-httpd" containerID="cri-o://43f0c7937886815d9f7975ac5a567ad9805f039d1f79d20965dca1643fdccbec" gracePeriod=30
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.102651 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5db486d6f7-9jq9s" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.202:9696/\": EOF"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.111255 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6989c95c85-6thsl"]
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.114119 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.139424 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6989c95c85-6thsl"]
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.155165 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.205636 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-public-tls-certs\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.205688 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-internal-tls-certs\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.205750 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-config\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.205772 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-httpd-config\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.205796 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-ovndb-tls-certs\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.205890 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-combined-ca-bundle\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.205939 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8gbm\" (UniqueName: \"kubernetes.io/projected/68910b8d-2ec3-4b7c-956c-e3d3518042cf-kube-api-access-n8gbm\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.309992 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-combined-ca-bundle\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.310306 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8gbm\" (UniqueName: \"kubernetes.io/projected/68910b8d-2ec3-4b7c-956c-e3d3518042cf-kube-api-access-n8gbm\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.310447 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-public-tls-certs\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.310536 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-internal-tls-certs\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.310658 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-config\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.310730 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-httpd-config\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.310811 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-ovndb-tls-certs\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.325469 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-httpd-config\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.330703 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-config\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.339306 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8gbm\" (UniqueName: \"kubernetes.io/projected/68910b8d-2ec3-4b7c-956c-e3d3518042cf-kube-api-access-n8gbm\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.343773 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-internal-tls-certs\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.345905 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-public-tls-certs\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.348044 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-ovndb-tls-certs\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.348495 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-combined-ca-bundle\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.420614 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6989c95c85-6thsl"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.626265 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e764852-fe70-4844-a2f2-53e15c45d4c1","Type":"ContainerStarted","Data":"4dcb9afa9312739842512264d7e9318586580a0008020bfa37918a74b0c057c7"}
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.693284 4751 generic.go:334] "Generic (PLEG): container finished" podID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerID="43f0c7937886815d9f7975ac5a567ad9805f039d1f79d20965dca1643fdccbec" exitCode=0
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.693399 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5db486d6f7-9jq9s" event={"ID":"5b32add6-b9f7-4e57-9dc8-ea71dbc40276","Type":"ContainerDied","Data":"43f0c7937886815d9f7975ac5a567ad9805f039d1f79d20965dca1643fdccbec"}
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.706364 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" event={"ID":"5a85ff98-c5c9-4735-ad9d-3c987976bd2f","Type":"ContainerStarted","Data":"e3064bf78a4cd92d2b24a8bdce3402cc789ce700663ceec250dcf8768aa0ad5c"}
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.706402 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" event={"ID":"5a85ff98-c5c9-4735-ad9d-3c987976bd2f","Type":"ContainerStarted","Data":"d4ad9ad89c73105ea7a484e1db33eb7c6d8564b633625c6640e82ad596737a10"}
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.739633 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.740662 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0"
event={"ID":"81228544-ce67-44f1-b4e0-6a218e154363","Type":"ContainerStarted","Data":"443cd982273ccdaa55a785fc0ffbd0bf36ddc8ddfcc7c39c30424ccabdcf775b"} Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.968696 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 21:38:44 crc kubenswrapper[4751]: I0130 21:38:44.387182 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6989c95c85-6thsl"] Jan 30 21:38:44 crc kubenswrapper[4751]: W0130 21:38:44.421576 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68910b8d_2ec3_4b7c_956c_e3d3518042cf.slice/crio-51f1da2c45874461212f0a926413e7c7a50ba90a39409e74c3824dbcadb6e7a0 WatchSource:0}: Error finding container 51f1da2c45874461212f0a926413e7c7a50ba90a39409e74c3824dbcadb6e7a0: Status 404 returned error can't find the container with id 51f1da2c45874461212f0a926413e7c7a50ba90a39409e74c3824dbcadb6e7a0 Jan 30 21:38:44 crc kubenswrapper[4751]: I0130 21:38:44.471897 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:44 crc kubenswrapper[4751]: I0130 21:38:44.767785 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"81228544-ce67-44f1-b4e0-6a218e154363","Type":"ContainerStarted","Data":"bdd03488d3195a549fc04a34aab5bd9be42fab7815eccaedf690eaba2f311d80"} Jan 30 21:38:44 crc kubenswrapper[4751]: I0130 21:38:44.770434 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e764852-fe70-4844-a2f2-53e15c45d4c1","Type":"ContainerStarted","Data":"b5e875fe8c945ee695bdcc985187f51e259136b02351ff697df26ea620a452b2"} Jan 30 21:38:44 crc kubenswrapper[4751]: I0130 21:38:44.775778 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6989c95c85-6thsl" event={"ID":"68910b8d-2ec3-4b7c-956c-e3d3518042cf","Type":"ContainerStarted","Data":"3efd88f9d9e3952d7fe52410ecacbd8f777f8f273d5d9e516b1db1ff8cf8f00a"} Jan 30 21:38:44 crc kubenswrapper[4751]: I0130 21:38:44.775822 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6989c95c85-6thsl" event={"ID":"68910b8d-2ec3-4b7c-956c-e3d3518042cf","Type":"ContainerStarted","Data":"51f1da2c45874461212f0a926413e7c7a50ba90a39409e74c3824dbcadb6e7a0"} Jan 30 21:38:44 crc kubenswrapper[4751]: I0130 21:38:44.777577 4751 generic.go:334] "Generic (PLEG): container finished" podID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" containerID="e3064bf78a4cd92d2b24a8bdce3402cc789ce700663ceec250dcf8768aa0ad5c" exitCode=0 Jan 30 21:38:44 crc kubenswrapper[4751]: I0130 21:38:44.778503 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" event={"ID":"5a85ff98-c5c9-4735-ad9d-3c987976bd2f","Type":"ContainerDied","Data":"e3064bf78a4cd92d2b24a8bdce3402cc789ce700663ceec250dcf8768aa0ad5c"} Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.086108 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5db486d6f7-9jq9s" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.202:9696/\": dial tcp 10.217.0.202:9696: connect: connection refused" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.645505 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.805267 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" event={"ID":"5a85ff98-c5c9-4735-ad9d-3c987976bd2f","Type":"ContainerStarted","Data":"d9b252a19e1756dc14b8604eb4ec0d16757d20c0506507f763599f15997045f8"} Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.806039 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.811634 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"81228544-ce67-44f1-b4e0-6a218e154363","Type":"ContainerStarted","Data":"d9de83cadc3b076ba912dc65301ea8bc1d6d0414a32e18815fa439a9c91d4dfb"} Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.817148 4751 generic.go:334] "Generic (PLEG): container finished" podID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerID="e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5" exitCode=0 Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.817379 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36866d1c-b1a0-4d3e-a87f-f5901b053bb5","Type":"ContainerDied","Data":"e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5"} Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.817481 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36866d1c-b1a0-4d3e-a87f-f5901b053bb5","Type":"ContainerDied","Data":"f50cd608a1a18449e64efb07caa3e3fd54b436a099efe0495f393f4382e9ab10"} Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.817549 4751 scope.go:117] "RemoveContainer" containerID="cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.817829 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.829140 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-scripts\") pod \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.830693 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-sg-core-conf-yaml\") pod \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.830831 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x98w\" (UniqueName: \"kubernetes.io/projected/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-kube-api-access-6x98w\") pod \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.830958 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-run-httpd\") pod \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.831069 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-log-httpd\") pod \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.831223 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-combined-ca-bundle\") pod \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.831377 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-config-data\") pod \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.833025 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "36866d1c-b1a0-4d3e-a87f-f5901b053bb5" (UID: "36866d1c-b1a0-4d3e-a87f-f5901b053bb5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.833280 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "36866d1c-b1a0-4d3e-a87f-f5901b053bb5" (UID: "36866d1c-b1a0-4d3e-a87f-f5901b053bb5"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.840186 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-scripts" (OuterVolumeSpecName: "scripts") pod "36866d1c-b1a0-4d3e-a87f-f5901b053bb5" (UID: "36866d1c-b1a0-4d3e-a87f-f5901b053bb5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.847637 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2e764852-fe70-4844-a2f2-53e15c45d4c1" containerName="cinder-api-log" containerID="cri-o://b5e875fe8c945ee695bdcc985187f51e259136b02351ff697df26ea620a452b2" gracePeriod=30 Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.848022 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e764852-fe70-4844-a2f2-53e15c45d4c1","Type":"ContainerStarted","Data":"40612707051f098447bcc08882c9e620dc67b4da793ad4ac978ab6016d413a31"} Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.848218 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.848371 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2e764852-fe70-4844-a2f2-53e15c45d4c1" containerName="cinder-api" containerID="cri-o://40612707051f098447bcc08882c9e620dc67b4da793ad4ac978ab6016d413a31" gracePeriod=30 Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.853384 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-kube-api-access-6x98w" (OuterVolumeSpecName: "kube-api-access-6x98w") pod "36866d1c-b1a0-4d3e-a87f-f5901b053bb5" (UID: "36866d1c-b1a0-4d3e-a87f-f5901b053bb5"). InnerVolumeSpecName "kube-api-access-6x98w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.855504 4751 generic.go:334] "Generic (PLEG): container finished" podID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerID="ffe40f5beac55335ed5a0e5ca3f2b87505ff8c1d5062b9daa9f817649c1fad14" exitCode=0 Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.855554 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5db486d6f7-9jq9s" event={"ID":"5b32add6-b9f7-4e57-9dc8-ea71dbc40276","Type":"ContainerDied","Data":"ffe40f5beac55335ed5a0e5ca3f2b87505ff8c1d5062b9daa9f817649c1fad14"} Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.857665 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6989c95c85-6thsl" event={"ID":"68910b8d-2ec3-4b7c-956c-e3d3518042cf","Type":"ContainerStarted","Data":"2f1eee4e6896c5696d579c8440f31cec0c40baaefac5c1c58acf0760e5a65cae"} Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.858083 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.867590 4751 scope.go:117] "RemoveContainer" containerID="3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.878665 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" podStartSLOduration=4.878646009 podStartE2EDuration="4.878646009s" podCreationTimestamp="2026-01-30 21:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:45.826528213 +0000 UTC m=+1464.572350862" watchObservedRunningTime="2026-01-30 21:38:45.878646009 +0000 UTC m=+1464.624468658" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.884653 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "36866d1c-b1a0-4d3e-a87f-f5901b053bb5" (UID: "36866d1c-b1a0-4d3e-a87f-f5901b053bb5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.893683 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.869719397 podStartE2EDuration="4.893661171s" podCreationTimestamp="2026-01-30 21:38:41 +0000 UTC" firstStartedPulling="2026-01-30 21:38:42.792287277 +0000 UTC m=+1461.538109926" lastFinishedPulling="2026-01-30 21:38:43.816229051 +0000 UTC m=+1462.562051700" observedRunningTime="2026-01-30 21:38:45.849144378 +0000 UTC m=+1464.594967037" watchObservedRunningTime="2026-01-30 21:38:45.893661171 +0000 UTC m=+1464.639483820" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.919934 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36866d1c-b1a0-4d3e-a87f-f5901b053bb5" (UID: "36866d1c-b1a0-4d3e-a87f-f5901b053bb5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.926706 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.926682845 podStartE2EDuration="3.926682845s" podCreationTimestamp="2026-01-30 21:38:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:45.867929202 +0000 UTC m=+1464.613751841" watchObservedRunningTime="2026-01-30 21:38:45.926682845 +0000 UTC m=+1464.672505484" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.939204 4751 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.942076 4751 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.942120 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.942132 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.942141 4751 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.942151 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x98w\" (UniqueName: \"kubernetes.io/projected/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-kube-api-access-6x98w\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.948282 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-config-data" (OuterVolumeSpecName: "config-data") pod "36866d1c-b1a0-4d3e-a87f-f5901b053bb5" (UID: "36866d1c-b1a0-4d3e-a87f-f5901b053bb5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.969824 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6989c95c85-6thsl" podStartSLOduration=2.96980211 podStartE2EDuration="2.96980211s" podCreationTimestamp="2026-01-30 21:38:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:45.89027164 +0000 UTC m=+1464.636094279" watchObservedRunningTime="2026-01-30 21:38:45.96980211 +0000 UTC m=+1464.715624759" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.992799 4751 scope.go:117] "RemoveContainer" containerID="e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.026084 4751 scope.go:117] "RemoveContainer" containerID="cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e" Jan 30 21:38:46 crc kubenswrapper[4751]: E0130 21:38:46.032529 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e\": container with ID starting with cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e not found: ID does not exist" containerID="cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.032580 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e"} err="failed to get container status \"cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e\": rpc error: code = NotFound desc = could not find container \"cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e\": container with ID starting with cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e not found: ID does not exist" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.032605 4751 scope.go:117] "RemoveContainer" containerID="3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0" Jan 30 21:38:46 crc kubenswrapper[4751]: E0130 21:38:46.033397 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0\": container with ID starting with 3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0 not found: ID does not exist" containerID="3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.033451 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0"} err="failed to get container status \"3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0\": rpc error: code = NotFound desc = could not find container \"3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0\": container with ID starting with 3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0 not found: ID does not exist" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.033467 4751 scope.go:117] "RemoveContainer" containerID="e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5" Jan 30 21:38:46 crc kubenswrapper[4751]: E0130 21:38:46.035669 4751 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5\": container with ID starting with e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5 not found: ID does not exist" containerID="e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.035746 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5"} err="failed to get container status \"e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5\": rpc error: code = NotFound desc = could not find container \"e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5\": container with ID starting with e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5 not found: ID does not exist" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.041938 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.049432 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.151996 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-public-tls-certs\") pod \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.152139 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-combined-ca-bundle\") pod \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.152275 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-ovndb-tls-certs\") pod \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.152309 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-config\") pod \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.152393 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz77t\" (UniqueName: \"kubernetes.io/projected/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-kube-api-access-cz77t\") pod \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.152410 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-httpd-config\") pod \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " Jan 30 21:38:46 crc 
kubenswrapper[4751]: I0130 21:38:46.152444 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-internal-tls-certs\") pod \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.168338 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-kube-api-access-cz77t" (OuterVolumeSpecName: "kube-api-access-cz77t") pod "5b32add6-b9f7-4e57-9dc8-ea71dbc40276" (UID: "5b32add6-b9f7-4e57-9dc8-ea71dbc40276"). InnerVolumeSpecName "kube-api-access-cz77t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.184122 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.208811 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5b32add6-b9f7-4e57-9dc8-ea71dbc40276" (UID: "5b32add6-b9f7-4e57-9dc8-ea71dbc40276"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.214363 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.228656 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:38:46 crc kubenswrapper[4751]: E0130 21:38:46.229201 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerName="neutron-httpd" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.229220 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerName="neutron-httpd" Jan 30 21:38:46 crc kubenswrapper[4751]: E0130 21:38:46.229242 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="ceilometer-notification-agent" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.229249 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="ceilometer-notification-agent" Jan 30 21:38:46 crc kubenswrapper[4751]: E0130 21:38:46.229268 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerName="neutron-api" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.229275 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerName="neutron-api" Jan 30 21:38:46 crc kubenswrapper[4751]: E0130 21:38:46.229285 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="sg-core" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.229290 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="sg-core" Jan 30 21:38:46 crc kubenswrapper[4751]: E0130 21:38:46.229317 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="proxy-httpd" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.229335 4751 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="proxy-httpd" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.229538 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerName="neutron-httpd" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.229558 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="ceilometer-notification-agent" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.229569 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="sg-core" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.229576 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="proxy-httpd" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.229588 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerName="neutron-api" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.231709 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.234549 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.234586 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.244051 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.260833 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz77t\" (UniqueName: \"kubernetes.io/projected/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-kube-api-access-cz77t\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.261341 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.269621 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-config" (OuterVolumeSpecName: "config") pod "5b32add6-b9f7-4e57-9dc8-ea71dbc40276" (UID: "5b32add6-b9f7-4e57-9dc8-ea71dbc40276"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.296361 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5b32add6-b9f7-4e57-9dc8-ea71dbc40276" (UID: "5b32add6-b9f7-4e57-9dc8-ea71dbc40276"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.298919 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5b32add6-b9f7-4e57-9dc8-ea71dbc40276" (UID: "5b32add6-b9f7-4e57-9dc8-ea71dbc40276"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.302483 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b32add6-b9f7-4e57-9dc8-ea71dbc40276" (UID: "5b32add6-b9f7-4e57-9dc8-ea71dbc40276"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.352528 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5b32add6-b9f7-4e57-9dc8-ea71dbc40276" (UID: "5b32add6-b9f7-4e57-9dc8-ea71dbc40276"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.363549 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-config-data\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.363618 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtcg6\" (UniqueName: \"kubernetes.io/projected/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-kube-api-access-xtcg6\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.363679 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.363713 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-scripts\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.364030 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-run-httpd\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.364087 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-log-httpd\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.364128 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 
21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.364387 4751 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.364412 4751 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.364428 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.364440 4751 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.364453 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.466028 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtcg6\" (UniqueName: \"kubernetes.io/projected/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-kube-api-access-xtcg6\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.466098 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.466138 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-scripts\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.466236 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-run-httpd\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.466255 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-log-httpd\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.466275 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.466320 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-config-data\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.466923 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-run-httpd\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.467001 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-log-httpd\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.469522 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.470390 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-config-data\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.471039 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-scripts\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.471295 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.484644 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtcg6\" (UniqueName: \"kubernetes.io/projected/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-kube-api-access-xtcg6\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.494114 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.871355 4751 generic.go:334] "Generic (PLEG): container finished" podID="2e764852-fe70-4844-a2f2-53e15c45d4c1" containerID="40612707051f098447bcc08882c9e620dc67b4da793ad4ac978ab6016d413a31" exitCode=0 Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.871603 4751 generic.go:334] "Generic (PLEG): container finished" podID="2e764852-fe70-4844-a2f2-53e15c45d4c1" containerID="b5e875fe8c945ee695bdcc985187f51e259136b02351ff697df26ea620a452b2" exitCode=143 Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.871479 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e764852-fe70-4844-a2f2-53e15c45d4c1","Type":"ContainerDied","Data":"40612707051f098447bcc08882c9e620dc67b4da793ad4ac978ab6016d413a31"} Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.871656 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e764852-fe70-4844-a2f2-53e15c45d4c1","Type":"ContainerDied","Data":"b5e875fe8c945ee695bdcc985187f51e259136b02351ff697df26ea620a452b2"} Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.873032 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.874193 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5db486d6f7-9jq9s" event={"ID":"5b32add6-b9f7-4e57-9dc8-ea71dbc40276","Type":"ContainerDied","Data":"830f7a58fa0eff11fb7741f803681a1e2d985e3d4ba5a29223b173fc3c4a8925"} Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.874235 4751 scope.go:117] "RemoveContainer" containerID="43f0c7937886815d9f7975ac5a567ad9805f039d1f79d20965dca1643fdccbec" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.874305 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.909599 4751 scope.go:117] "RemoveContainer" containerID="ffe40f5beac55335ed5a0e5ca3f2b87505ff8c1d5062b9daa9f817649c1fad14" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.958027 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5db486d6f7-9jq9s"] Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.970049 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5db486d6f7-9jq9s"] Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.979231 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-scripts\") pod \"2e764852-fe70-4844-a2f2-53e15c45d4c1\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.979384 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e764852-fe70-4844-a2f2-53e15c45d4c1-etc-machine-id\") pod \"2e764852-fe70-4844-a2f2-53e15c45d4c1\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.979466 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data\") pod \"2e764852-fe70-4844-a2f2-53e15c45d4c1\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.979502 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-combined-ca-bundle\") pod \"2e764852-fe70-4844-a2f2-53e15c45d4c1\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.979561 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data-custom\") pod \"2e764852-fe70-4844-a2f2-53e15c45d4c1\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.979575 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e764852-fe70-4844-a2f2-53e15c45d4c1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2e764852-fe70-4844-a2f2-53e15c45d4c1" (UID: "2e764852-fe70-4844-a2f2-53e15c45d4c1"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.979596 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc746\" (UniqueName: \"kubernetes.io/projected/2e764852-fe70-4844-a2f2-53e15c45d4c1-kube-api-access-vc746\") pod \"2e764852-fe70-4844-a2f2-53e15c45d4c1\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.979883 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e764852-fe70-4844-a2f2-53e15c45d4c1-logs\") pod \"2e764852-fe70-4844-a2f2-53e15c45d4c1\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.980995 4751 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e764852-fe70-4844-a2f2-53e15c45d4c1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.986459 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e764852-fe70-4844-a2f2-53e15c45d4c1-logs" (OuterVolumeSpecName: "logs") pod "2e764852-fe70-4844-a2f2-53e15c45d4c1" (UID: "2e764852-fe70-4844-a2f2-53e15c45d4c1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.988822 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2e764852-fe70-4844-a2f2-53e15c45d4c1" (UID: "2e764852-fe70-4844-a2f2-53e15c45d4c1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.989138 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-scripts" (OuterVolumeSpecName: "scripts") pod "2e764852-fe70-4844-a2f2-53e15c45d4c1" (UID: "2e764852-fe70-4844-a2f2-53e15c45d4c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.989468 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e764852-fe70-4844-a2f2-53e15c45d4c1-kube-api-access-vc746" (OuterVolumeSpecName: "kube-api-access-vc746") pod "2e764852-fe70-4844-a2f2-53e15c45d4c1" (UID: "2e764852-fe70-4844-a2f2-53e15c45d4c1"). InnerVolumeSpecName "kube-api-access-vc746". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.025800 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e764852-fe70-4844-a2f2-53e15c45d4c1" (UID: "2e764852-fe70-4844-a2f2-53e15c45d4c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.055885 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data" (OuterVolumeSpecName: "config-data") pod "2e764852-fe70-4844-a2f2-53e15c45d4c1" (UID: "2e764852-fe70-4844-a2f2-53e15c45d4c1"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.090131 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.090172 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.090231 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.090244 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc746\" (UniqueName: \"kubernetes.io/projected/2e764852-fe70-4844-a2f2-53e15c45d4c1-kube-api-access-vc746\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.090255 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e764852-fe70-4844-a2f2-53e15c45d4c1-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.090307 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.096859 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.141826 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.405865 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-586n4" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server" probeResult="failure" output=< Jan 30 21:38:47 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:38:47 crc kubenswrapper[4751]: > Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.906841 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca","Type":"ContainerStarted","Data":"57c96884c7e80a6d477536792bb89b73f1542edc832a38be9d3a266693f347ec"} Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.907285 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca","Type":"ContainerStarted","Data":"566588f45edf254c38a8bd2cb4cecfcf41053da3f96016aee3abcebf59acf4a0"} Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.910542 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e764852-fe70-4844-a2f2-53e15c45d4c1","Type":"ContainerDied","Data":"4dcb9afa9312739842512264d7e9318586580a0008020bfa37918a74b0c057c7"} Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.910603 4751 scope.go:117] "RemoveContainer" containerID="40612707051f098447bcc08882c9e620dc67b4da793ad4ac978ab6016d413a31" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.910654 4751 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.954938 4751 scope.go:117] "RemoveContainer" containerID="b5e875fe8c945ee695bdcc985187f51e259136b02351ff697df26ea620a452b2" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.955904 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.964692 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.024068 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e764852-fe70-4844-a2f2-53e15c45d4c1" path="/var/lib/kubelet/pods/2e764852-fe70-4844-a2f2-53e15c45d4c1/volumes" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.024800 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" path="/var/lib/kubelet/pods/36866d1c-b1a0-4d3e-a87f-f5901b053bb5/volumes" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.025685 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" path="/var/lib/kubelet/pods/5b32add6-b9f7-4e57-9dc8-ea71dbc40276/volumes" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.030994 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 21:38:48 crc kubenswrapper[4751]: E0130 21:38:48.031444 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e764852-fe70-4844-a2f2-53e15c45d4c1" containerName="cinder-api" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.031460 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e764852-fe70-4844-a2f2-53e15c45d4c1" containerName="cinder-api" Jan 30 21:38:48 crc kubenswrapper[4751]: E0130 21:38:48.031486 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e764852-fe70-4844-a2f2-53e15c45d4c1" containerName="cinder-api-log" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.031492 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e764852-fe70-4844-a2f2-53e15c45d4c1" containerName="cinder-api-log" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.031684 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e764852-fe70-4844-a2f2-53e15c45d4c1" containerName="cinder-api-log" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.031715 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e764852-fe70-4844-a2f2-53e15c45d4c1" containerName="cinder-api" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.032953 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.033037 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.035810 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.036141 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.036596 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.221996 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-config-data\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.222485 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvmbb\" (UniqueName: \"kubernetes.io/projected/e741273e-caa0-4a2c-9ed0-6bae195052ce-kube-api-access-hvmbb\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.222554 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e741273e-caa0-4a2c-9ed0-6bae195052ce-logs\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.222767 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-scripts\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.222917 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.223060 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.223098 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.223206 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e741273e-caa0-4a2c-9ed0-6bae195052ce-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 
21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.223352 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-config-data-custom\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.325759 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-config-data-custom\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.325953 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-config-data\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.326046 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvmbb\" (UniqueName: \"kubernetes.io/projected/e741273e-caa0-4a2c-9ed0-6bae195052ce-kube-api-access-hvmbb\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.326137 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e741273e-caa0-4a2c-9ed0-6bae195052ce-logs\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.326189 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-scripts\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.326233 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.326306 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.326371 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.326434 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e741273e-caa0-4a2c-9ed0-6bae195052ce-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " 
pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.326586 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e741273e-caa0-4a2c-9ed0-6bae195052ce-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.327866 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e741273e-caa0-4a2c-9ed0-6bae195052ce-logs\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.345794 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-scripts\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.346350 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.346366 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-config-data\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.347941 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.349196 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.353960 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvmbb\" (UniqueName: \"kubernetes.io/projected/e741273e-caa0-4a2c-9ed0-6bae195052ce-kube-api-access-hvmbb\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.361640 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-config-data-custom\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.669260 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.931527 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca","Type":"ContainerStarted","Data":"645f73160e3f56e7ed531a836f7c9a1561da7bc5259b4db0442d52545c4d2302"} Jan 30 21:38:49 crc kubenswrapper[4751]: I0130 21:38:49.185906 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 21:38:49 crc kubenswrapper[4751]: W0130 21:38:49.188474 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode741273e_caa0_4a2c_9ed0_6bae195052ce.slice/crio-43e5e0d485fdc67c25519d34c1072eb1c974b9add5cd791930f78a2168f65c04 WatchSource:0}: Error finding container 43e5e0d485fdc67c25519d34c1072eb1c974b9add5cd791930f78a2168f65c04: Status 404 returned error can't find the container with id 43e5e0d485fdc67c25519d34c1072eb1c974b9add5cd791930f78a2168f65c04 Jan 30 21:38:49 crc kubenswrapper[4751]: I0130 21:38:49.954655 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca","Type":"ContainerStarted","Data":"b5aac7d6f497e2328bb417b25d63cf92f6851dadf3db5e57dc476e250917965b"} Jan 30 21:38:49 crc kubenswrapper[4751]: I0130 21:38:49.960723 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e741273e-caa0-4a2c-9ed0-6bae195052ce","Type":"ContainerStarted","Data":"4ff08d62915449ad03ba02b3f44c29ebe1137cacf07638b3880f9e961517a3a3"} Jan 30 21:38:49 crc kubenswrapper[4751]: I0130 21:38:49.960756 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e741273e-caa0-4a2c-9ed0-6bae195052ce","Type":"ContainerStarted","Data":"43e5e0d485fdc67c25519d34c1072eb1c974b9add5cd791930f78a2168f65c04"} Jan 30 21:38:50 crc kubenswrapper[4751]: I0130 21:38:50.004093 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:50 crc kubenswrapper[4751]: I0130 21:38:50.088346 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-577c4d4496-28rjx"] Jan 30 21:38:50 crc kubenswrapper[4751]: I0130 21:38:50.093523 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-577c4d4496-28rjx" podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerName="barbican-api-log" containerID="cri-o://d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d" gracePeriod=30 Jan 30 21:38:50 crc kubenswrapper[4751]: I0130 21:38:50.093822 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-577c4d4496-28rjx" podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerName="barbican-api" containerID="cri-o://59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718" gracePeriod=30 Jan 30 21:38:50 crc kubenswrapper[4751]: I0130 21:38:50.976107 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e741273e-caa0-4a2c-9ed0-6bae195052ce","Type":"ContainerStarted","Data":"5c9ed1c831469f09eeb8ebee04c233b0a02684becb46fc04a7a28aceade51e9f"} Jan 30 21:38:50 crc kubenswrapper[4751]: I0130 21:38:50.976707 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 21:38:50 crc kubenswrapper[4751]: I0130 21:38:50.978561 4751 
generic.go:334] "Generic (PLEG): container finished" podID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerID="d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d" exitCode=143 Jan 30 21:38:50 crc kubenswrapper[4751]: I0130 21:38:50.978591 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-577c4d4496-28rjx" event={"ID":"49d33f4c-f33a-445b-90ab-795e750ecf2a","Type":"ContainerDied","Data":"d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d"} Jan 30 21:38:51 crc kubenswrapper[4751]: I0130 21:38:51.008169 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.008152502 podStartE2EDuration="4.008152502s" podCreationTimestamp="2026-01-30 21:38:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:50.996121709 +0000 UTC m=+1469.741944358" watchObservedRunningTime="2026-01-30 21:38:51.008152502 +0000 UTC m=+1469.753975151" Jan 30 21:38:52 crc kubenswrapper[4751]: I0130 21:38:52.000037 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca","Type":"ContainerStarted","Data":"e9063569ccc33e77085e2b00951a57c6def385f3008e41cdbaf9636a0f5b353f"} Jan 30 21:38:52 crc kubenswrapper[4751]: I0130 21:38:52.047302 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.443408427 podStartE2EDuration="6.047278483s" podCreationTimestamp="2026-01-30 21:38:46 +0000 UTC" firstStartedPulling="2026-01-30 21:38:47.102336732 +0000 UTC m=+1465.848159391" lastFinishedPulling="2026-01-30 21:38:51.706206788 +0000 UTC m=+1470.452029447" observedRunningTime="2026-01-30 21:38:52.036664478 +0000 UTC m=+1470.782487127" watchObservedRunningTime="2026-01-30 21:38:52.047278483 +0000 UTC m=+1470.793101132" Jan 30 21:38:52 crc kubenswrapper[4751]: I0130 21:38:52.231633 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:52 crc kubenswrapper[4751]: I0130 21:38:52.323976 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-j4xm6"] Jan 30 21:38:52 crc kubenswrapper[4751]: I0130 21:38:52.324516 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" podUID="e8978647-a7c1-4e25-b9c9-114227c06b39" containerName="dnsmasq-dns" containerID="cri-o://292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446" gracePeriod=10 Jan 30 21:38:52 crc kubenswrapper[4751]: I0130 21:38:52.434468 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 21:38:52 crc kubenswrapper[4751]: I0130 21:38:52.488604 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.026597 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.052642 4751 generic.go:334] "Generic (PLEG): container finished" podID="e8978647-a7c1-4e25-b9c9-114227c06b39" containerID="292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446" exitCode=0 Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.052892 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="81228544-ce67-44f1-b4e0-6a218e154363" containerName="cinder-scheduler" containerID="cri-o://bdd03488d3195a549fc04a34aab5bd9be42fab7815eccaedf690eaba2f311d80" gracePeriod=30 Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.053293 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.053960 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" event={"ID":"e8978647-a7c1-4e25-b9c9-114227c06b39","Type":"ContainerDied","Data":"292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446"} Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.054001 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" event={"ID":"e8978647-a7c1-4e25-b9c9-114227c06b39","Type":"ContainerDied","Data":"2f99bc73fbfc7d9583563a00884a12bf270505fadf2b5daa26dca51bca6913ea"} Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.054021 4751 scope.go:117] "RemoveContainer" containerID="292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.054469 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="81228544-ce67-44f1-b4e0-6a218e154363" containerName="probe" containerID="cri-o://d9de83cadc3b076ba912dc65301ea8bc1d6d0414a32e18815fa439a9c91d4dfb" gracePeriod=30 Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.055006 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.112204 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-nb\") pod \"e8978647-a7c1-4e25-b9c9-114227c06b39\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.112369 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-sb\") pod \"e8978647-a7c1-4e25-b9c9-114227c06b39\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.112414 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddsbn\" (UniqueName: \"kubernetes.io/projected/e8978647-a7c1-4e25-b9c9-114227c06b39-kube-api-access-ddsbn\") pod \"e8978647-a7c1-4e25-b9c9-114227c06b39\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.112453 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-config\") pod \"e8978647-a7c1-4e25-b9c9-114227c06b39\" (UID: 
\"e8978647-a7c1-4e25-b9c9-114227c06b39\") " Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.112479 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-svc\") pod \"e8978647-a7c1-4e25-b9c9-114227c06b39\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.112537 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-swift-storage-0\") pod \"e8978647-a7c1-4e25-b9c9-114227c06b39\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.120949 4751 scope.go:117] "RemoveContainer" containerID="e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.137898 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8978647-a7c1-4e25-b9c9-114227c06b39-kube-api-access-ddsbn" (OuterVolumeSpecName: "kube-api-access-ddsbn") pod "e8978647-a7c1-4e25-b9c9-114227c06b39" (UID: "e8978647-a7c1-4e25-b9c9-114227c06b39"). InnerVolumeSpecName "kube-api-access-ddsbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.217038 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddsbn\" (UniqueName: \"kubernetes.io/projected/e8978647-a7c1-4e25-b9c9-114227c06b39-kube-api-access-ddsbn\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.246591 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-config" (OuterVolumeSpecName: "config") pod "e8978647-a7c1-4e25-b9c9-114227c06b39" (UID: "e8978647-a7c1-4e25-b9c9-114227c06b39"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.272617 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e8978647-a7c1-4e25-b9c9-114227c06b39" (UID: "e8978647-a7c1-4e25-b9c9-114227c06b39"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.319987 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.320022 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.322774 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e8978647-a7c1-4e25-b9c9-114227c06b39" (UID: "e8978647-a7c1-4e25-b9c9-114227c06b39"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.334450 4751 scope.go:117] "RemoveContainer" containerID="292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446" Jan 30 21:38:53 crc kubenswrapper[4751]: E0130 21:38:53.336765 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446\": container with ID starting with 292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446 not found: ID does not exist" containerID="292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.336799 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446"} err="failed to get container status \"292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446\": rpc error: code = NotFound desc = could not find container \"292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446\": container with ID starting with 292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446 not found: ID does not exist" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.336818 4751 scope.go:117] "RemoveContainer" containerID="e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1" Jan 30 21:38:53 crc kubenswrapper[4751]: E0130 21:38:53.337590 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1\": container with ID starting with e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1 not found: ID does not exist" containerID="e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.337649 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1"} err="failed to get container status \"e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1\": rpc error: code = NotFound desc = could not find container \"e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1\": container with ID starting with e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1 not found: ID does not exist" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.339736 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e8978647-a7c1-4e25-b9c9-114227c06b39" (UID: "e8978647-a7c1-4e25-b9c9-114227c06b39"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.341843 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e8978647-a7c1-4e25-b9c9-114227c06b39" (UID: "e8978647-a7c1-4e25-b9c9-114227c06b39"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.425014 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.425068 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.425081 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.427879 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-j4xm6"] Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.432595 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-577c4d4496-28rjx" podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.210:9311/healthcheck\": read tcp 10.217.0.2:45748->10.217.0.210:9311: read: connection reset by peer" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.432607 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-577c4d4496-28rjx" podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.210:9311/healthcheck\": read tcp 10.217.0.2:45758->10.217.0.210:9311: read: connection reset by peer" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.442930 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-j4xm6"] Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.874616 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.995800 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8978647-a7c1-4e25-b9c9-114227c06b39" path="/var/lib/kubelet/pods/e8978647-a7c1-4e25-b9c9-114227c06b39/volumes" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.036356 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtm4b\" (UniqueName: \"kubernetes.io/projected/49d33f4c-f33a-445b-90ab-795e750ecf2a-kube-api-access-mtm4b\") pod \"49d33f4c-f33a-445b-90ab-795e750ecf2a\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.036417 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49d33f4c-f33a-445b-90ab-795e750ecf2a-logs\") pod \"49d33f4c-f33a-445b-90ab-795e750ecf2a\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.036658 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data\") pod \"49d33f4c-f33a-445b-90ab-795e750ecf2a\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.037010 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49d33f4c-f33a-445b-90ab-795e750ecf2a-logs" (OuterVolumeSpecName: "logs") pod "49d33f4c-f33a-445b-90ab-795e750ecf2a" (UID: "49d33f4c-f33a-445b-90ab-795e750ecf2a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.037547 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data-custom\") pod \"49d33f4c-f33a-445b-90ab-795e750ecf2a\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.037751 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-combined-ca-bundle\") pod \"49d33f4c-f33a-445b-90ab-795e750ecf2a\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.038866 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49d33f4c-f33a-445b-90ab-795e750ecf2a-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.041217 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "49d33f4c-f33a-445b-90ab-795e750ecf2a" (UID: "49d33f4c-f33a-445b-90ab-795e750ecf2a"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.042847 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49d33f4c-f33a-445b-90ab-795e750ecf2a-kube-api-access-mtm4b" (OuterVolumeSpecName: "kube-api-access-mtm4b") pod "49d33f4c-f33a-445b-90ab-795e750ecf2a" (UID: "49d33f4c-f33a-445b-90ab-795e750ecf2a"). InnerVolumeSpecName "kube-api-access-mtm4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.064581 4751 generic.go:334] "Generic (PLEG): container finished" podID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerID="59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718" exitCode=0 Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.064633 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.064672 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-577c4d4496-28rjx" event={"ID":"49d33f4c-f33a-445b-90ab-795e750ecf2a","Type":"ContainerDied","Data":"59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718"} Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.064706 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-577c4d4496-28rjx" event={"ID":"49d33f4c-f33a-445b-90ab-795e750ecf2a","Type":"ContainerDied","Data":"90765e22c210e8d4dca2167b620180485ee5c8cdff299ab3f9aa70131e4301fe"} Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.064726 4751 scope.go:117] "RemoveContainer" containerID="59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.073400 4751 generic.go:334] "Generic (PLEG): container finished" podID="81228544-ce67-44f1-b4e0-6a218e154363" containerID="d9de83cadc3b076ba912dc65301ea8bc1d6d0414a32e18815fa439a9c91d4dfb" exitCode=0 Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.073801 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"81228544-ce67-44f1-b4e0-6a218e154363","Type":"ContainerDied","Data":"d9de83cadc3b076ba912dc65301ea8bc1d6d0414a32e18815fa439a9c91d4dfb"} Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.078699 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49d33f4c-f33a-445b-90ab-795e750ecf2a" (UID: "49d33f4c-f33a-445b-90ab-795e750ecf2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.106526 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data" (OuterVolumeSpecName: "config-data") pod "49d33f4c-f33a-445b-90ab-795e750ecf2a" (UID: "49d33f4c-f33a-445b-90ab-795e750ecf2a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.126749 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.126791 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.141074 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.141107 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.141117 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtm4b\" (UniqueName: \"kubernetes.io/projected/49d33f4c-f33a-445b-90ab-795e750ecf2a-kube-api-access-mtm4b\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.141127 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.212918 4751 scope.go:117] "RemoveContainer" containerID="d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.258043 4751 scope.go:117] "RemoveContainer" containerID="59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718" Jan 30 21:38:54 crc kubenswrapper[4751]: E0130 21:38:54.258509 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718\": container with ID starting with 59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718 not found: ID does not exist" containerID="59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.258554 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718"} err="failed to get container status \"59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718\": rpc error: code = NotFound desc = could not find container \"59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718\": container with ID starting with 59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718 not found: ID does not exist" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.258579 4751 scope.go:117] "RemoveContainer" containerID="d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d" Jan 30 21:38:54 crc kubenswrapper[4751]: E0130 21:38:54.258941 4751 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d\": container with ID starting with d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d not found: ID does not exist" containerID="d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.258976 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d"} err="failed to get container status \"d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d\": rpc error: code = NotFound desc = could not find container \"d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d\": container with ID starting with d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d not found: ID does not exist" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.402689 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-577c4d4496-28rjx"] Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.410679 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-577c4d4496-28rjx"] Jan 30 21:38:55 crc kubenswrapper[4751]: I0130 21:38:55.992513 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" path="/var/lib/kubelet/pods/49d33f4c-f33a-445b-90ab-795e750ecf2a/volumes" Jan 30 21:38:56 crc kubenswrapper[4751]: I0130 21:38:56.277755 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:56 crc kubenswrapper[4751]: I0130 21:38:56.280476 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:57 crc kubenswrapper[4751]: I0130 21:38:57.415136 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-586n4" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server" probeResult="failure" output=< Jan 30 21:38:57 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:38:57 crc kubenswrapper[4751]: > Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.118734 4751 generic.go:334] "Generic (PLEG): container finished" podID="81228544-ce67-44f1-b4e0-6a218e154363" containerID="bdd03488d3195a549fc04a34aab5bd9be42fab7815eccaedf690eaba2f311d80" exitCode=0 Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.118782 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"81228544-ce67-44f1-b4e0-6a218e154363","Type":"ContainerDied","Data":"bdd03488d3195a549fc04a34aab5bd9be42fab7815eccaedf690eaba2f311d80"} Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.119059 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"81228544-ce67-44f1-b4e0-6a218e154363","Type":"ContainerDied","Data":"443cd982273ccdaa55a785fc0ffbd0bf36ddc8ddfcc7c39c30424ccabdcf775b"} Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.119073 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="443cd982273ccdaa55a785fc0ffbd0bf36ddc8ddfcc7c39c30424ccabdcf775b" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.208624 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.345281 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/81228544-ce67-44f1-b4e0-6a218e154363-etc-machine-id\") pod \"81228544-ce67-44f1-b4e0-6a218e154363\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.345461 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-combined-ca-bundle\") pod \"81228544-ce67-44f1-b4e0-6a218e154363\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.345497 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-scripts\") pod \"81228544-ce67-44f1-b4e0-6a218e154363\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.345602 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data-custom\") pod \"81228544-ce67-44f1-b4e0-6a218e154363\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.345662 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data\") pod \"81228544-ce67-44f1-b4e0-6a218e154363\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.345691 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbtqt\" (UniqueName: \"kubernetes.io/projected/81228544-ce67-44f1-b4e0-6a218e154363-kube-api-access-bbtqt\") pod \"81228544-ce67-44f1-b4e0-6a218e154363\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.346067 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81228544-ce67-44f1-b4e0-6a218e154363-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "81228544-ce67-44f1-b4e0-6a218e154363" (UID: "81228544-ce67-44f1-b4e0-6a218e154363"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.346715 4751 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/81228544-ce67-44f1-b4e0-6a218e154363-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.351234 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "81228544-ce67-44f1-b4e0-6a218e154363" (UID: "81228544-ce67-44f1-b4e0-6a218e154363"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.352567 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-scripts" (OuterVolumeSpecName: "scripts") pod "81228544-ce67-44f1-b4e0-6a218e154363" (UID: "81228544-ce67-44f1-b4e0-6a218e154363"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.373718 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81228544-ce67-44f1-b4e0-6a218e154363-kube-api-access-bbtqt" (OuterVolumeSpecName: "kube-api-access-bbtqt") pod "81228544-ce67-44f1-b4e0-6a218e154363" (UID: "81228544-ce67-44f1-b4e0-6a218e154363"). InnerVolumeSpecName "kube-api-access-bbtqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.418488 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81228544-ce67-44f1-b4e0-6a218e154363" (UID: "81228544-ce67-44f1-b4e0-6a218e154363"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.449043 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.449074 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbtqt\" (UniqueName: \"kubernetes.io/projected/81228544-ce67-44f1-b4e0-6a218e154363-kube-api-access-bbtqt\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.449085 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.449093 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.478586 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data" (OuterVolumeSpecName: "config-data") pod "81228544-ce67-44f1-b4e0-6a218e154363" (UID: "81228544-ce67-44f1-b4e0-6a218e154363"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.550793 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.128913 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.172679 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.189176 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.192810 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.219449 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 21:38:59 crc kubenswrapper[4751]: E0130 21:38:59.220014 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81228544-ce67-44f1-b4e0-6a218e154363" containerName="cinder-scheduler" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220033 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="81228544-ce67-44f1-b4e0-6a218e154363" containerName="cinder-scheduler" Jan 30 21:38:59 crc kubenswrapper[4751]: E0130 21:38:59.220054 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8978647-a7c1-4e25-b9c9-114227c06b39" containerName="init" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220060 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8978647-a7c1-4e25-b9c9-114227c06b39" containerName="init" Jan 30 21:38:59 crc kubenswrapper[4751]: E0130 21:38:59.220069 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerName="barbican-api" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220075 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerName="barbican-api" Jan 30 21:38:59 crc kubenswrapper[4751]: E0130 21:38:59.220094 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerName="barbican-api-log" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220099 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerName="barbican-api-log" Jan 30 21:38:59 crc kubenswrapper[4751]: E0130 21:38:59.220121 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8978647-a7c1-4e25-b9c9-114227c06b39" containerName="dnsmasq-dns" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220127 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8978647-a7c1-4e25-b9c9-114227c06b39" containerName="dnsmasq-dns" Jan 30 21:38:59 crc kubenswrapper[4751]: E0130 21:38:59.220135 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81228544-ce67-44f1-b4e0-6a218e154363" containerName="probe" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220142 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="81228544-ce67-44f1-b4e0-6a218e154363" containerName="probe" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220385 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="81228544-ce67-44f1-b4e0-6a218e154363" containerName="cinder-scheduler" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220403 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="81228544-ce67-44f1-b4e0-6a218e154363" containerName="probe" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220413 4751 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerName="barbican-api-log" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220422 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8978647-a7c1-4e25-b9c9-114227c06b39" containerName="dnsmasq-dns" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220434 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerName="barbican-api" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.221632 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.229572 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.230681 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.370787 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-config-data\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.370875 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.370892 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.370922 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.371064 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-scripts\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.371093 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5lvg\" (UniqueName: \"kubernetes.io/projected/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-kube-api-access-d5lvg\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.473392 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-config-data\") 
pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.473472 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.473493 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.473518 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.473607 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-scripts\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.473627 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5lvg\" (UniqueName: \"kubernetes.io/projected/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-kube-api-access-d5lvg\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.475042 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.478724 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.482757 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.484092 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-config-data\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.487799 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-scripts\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.490897 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5lvg\" (UniqueName: \"kubernetes.io/projected/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-kube-api-access-d5lvg\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.558433 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.710904 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.713013 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.716651 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-cb7bq" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.717286 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.718157 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.750070 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.895501 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config-secret\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.895680 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.895718 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8zl2\" (UniqueName: \"kubernetes.io/projected/4d8da9bd-aba2-45b4-acc9-7fb085937e02-kube-api-access-k8zl2\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.895788 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.998762 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.998817 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8zl2\" (UniqueName: \"kubernetes.io/projected/4d8da9bd-aba2-45b4-acc9-7fb085937e02-kube-api-access-k8zl2\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.998882 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.999000 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config-secret\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.000482 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.006375 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.024925 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8zl2\" (UniqueName: \"kubernetes.io/projected/4d8da9bd-aba2-45b4-acc9-7fb085937e02-kube-api-access-k8zl2\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.042623 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config-secret\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.059916 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81228544-ce67-44f1-b4e0-6a218e154363" path="/var/lib/kubelet/pods/81228544-ce67-44f1-b4e0-6a218e154363/volumes" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.061247 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.066755 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.069079 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.083684 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.111866 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.120563 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: W0130 21:39:00.150119 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod927e4c2b_4fb5_4ccb_adeb_8847ea0c4c56.slice/crio-d99060427762460061aade093f19a0e2bb89cdcb1a82789a27c6ea17076cecba WatchSource:0}: Error finding container d99060427762460061aade093f19a0e2bb89cdcb1a82789a27c6ea17076cecba: Status 404 returned error can't find the container with id d99060427762460061aade093f19a0e2bb89cdcb1a82789a27c6ea17076cecba Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.153163 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.198993 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.205402 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/af93872a-62a1-407c-9932-2afb4313f457-openstack-config\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.205515 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/af93872a-62a1-407c-9932-2afb4313f457-openstack-config-secret\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.205570 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46mvp\" (UniqueName: \"kubernetes.io/projected/af93872a-62a1-407c-9932-2afb4313f457-kube-api-access-46mvp\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.205590 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af93872a-62a1-407c-9932-2afb4313f457-combined-ca-bundle\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.242171 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:39:00 crc kubenswrapper[4751]: E0130 21:39:00.317936 4751 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 21:39:00 crc kubenswrapper[4751]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_4d8da9bd-aba2-45b4-acc9-7fb085937e02_0(b887095ea0c872474f9d78a2358219c9876e6dd480dcbfe55a57315616eb56ca): error adding pod openstack_openstackclient to CNI network 
"multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b887095ea0c872474f9d78a2358219c9876e6dd480dcbfe55a57315616eb56ca" Netns:"/var/run/netns/7d09ce61-bb6f-4487-bb52-c83ca4231ff9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=b887095ea0c872474f9d78a2358219c9876e6dd480dcbfe55a57315616eb56ca;K8S_POD_UID=4d8da9bd-aba2-45b4-acc9-7fb085937e02" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/4d8da9bd-aba2-45b4-acc9-7fb085937e02]: expected pod UID "4d8da9bd-aba2-45b4-acc9-7fb085937e02" but got "af93872a-62a1-407c-9932-2afb4313f457" from Kube API Jan 30 21:39:00 crc kubenswrapper[4751]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 21:39:00 crc kubenswrapper[4751]: > Jan 30 21:39:00 crc kubenswrapper[4751]: E0130 21:39:00.317999 4751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 21:39:00 crc kubenswrapper[4751]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_4d8da9bd-aba2-45b4-acc9-7fb085937e02_0(b887095ea0c872474f9d78a2358219c9876e6dd480dcbfe55a57315616eb56ca): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b887095ea0c872474f9d78a2358219c9876e6dd480dcbfe55a57315616eb56ca" Netns:"/var/run/netns/7d09ce61-bb6f-4487-bb52-c83ca4231ff9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=b887095ea0c872474f9d78a2358219c9876e6dd480dcbfe55a57315616eb56ca;K8S_POD_UID=4d8da9bd-aba2-45b4-acc9-7fb085937e02" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/4d8da9bd-aba2-45b4-acc9-7fb085937e02]: expected pod UID "4d8da9bd-aba2-45b4-acc9-7fb085937e02" but got "af93872a-62a1-407c-9932-2afb4313f457" from Kube API Jan 30 21:39:00 crc kubenswrapper[4751]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 21:39:00 crc kubenswrapper[4751]: > pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.318394 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/af93872a-62a1-407c-9932-2afb4313f457-openstack-config\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.318550 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/af93872a-62a1-407c-9932-2afb4313f457-openstack-config-secret\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.318626 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46mvp\" (UniqueName: \"kubernetes.io/projected/af93872a-62a1-407c-9932-2afb4313f457-kube-api-access-46mvp\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.318660 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af93872a-62a1-407c-9932-2afb4313f457-combined-ca-bundle\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.319294 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/af93872a-62a1-407c-9932-2afb4313f457-openstack-config\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.330185 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5486cc9958-dvfn2"] Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.330744 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/af93872a-62a1-407c-9932-2afb4313f457-openstack-config-secret\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.330896 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5486cc9958-dvfn2" podUID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" containerName="placement-log" containerID="cri-o://5f448114a9e068f8f50004034c9e2ada11f4f45525f6ffe5d1ccd7a3167a8e23" gracePeriod=30 Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.331098 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5486cc9958-dvfn2" podUID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" containerName="placement-api" containerID="cri-o://744360fb184e7a689ab217ce4f6709b0ff7ab37b1bf6dc2a42f1d4e37e6d2d8f" gracePeriod=30 Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.339124 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af93872a-62a1-407c-9932-2afb4313f457-combined-ca-bundle\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.378672 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46mvp\" (UniqueName: \"kubernetes.io/projected/af93872a-62a1-407c-9932-2afb4313f457-kube-api-access-46mvp\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.467251 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.162905 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.187590 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56","Type":"ContainerStarted","Data":"d99060427762460061aade093f19a0e2bb89cdcb1a82789a27c6ea17076cecba"} Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.191293 4751 generic.go:334] "Generic (PLEG): container finished" podID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" containerID="5f448114a9e068f8f50004034c9e2ada11f4f45525f6ffe5d1ccd7a3167a8e23" exitCode=143 Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.191393 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.192112 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5486cc9958-dvfn2" event={"ID":"5089359d-290c-4b07-80e4-0c4c73ffa8cd","Type":"ContainerDied","Data":"5f448114a9e068f8f50004034c9e2ada11f4f45525f6ffe5d1ccd7a3167a8e23"} Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.197739 4751 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="4d8da9bd-aba2-45b4-acc9-7fb085937e02" podUID="af93872a-62a1-407c-9932-2afb4313f457" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.201991 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 21:39:01 crc kubenswrapper[4751]: W0130 21:39:01.249468 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf93872a_62a1_407c_9932_2afb4313f457.slice/crio-739a2a72ce761a3f373f6256f8497584526f26f3772ad8fb56df46573f893e74 WatchSource:0}: Error finding container 739a2a72ce761a3f373f6256f8497584526f26f3772ad8fb56df46573f893e74: Status 404 returned error can't find the container with id 739a2a72ce761a3f373f6256f8497584526f26f3772ad8fb56df46573f893e74 Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.372512 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8zl2\" (UniqueName: \"kubernetes.io/projected/4d8da9bd-aba2-45b4-acc9-7fb085937e02-kube-api-access-k8zl2\") pod \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.372958 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config-secret\") pod \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.373014 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config\") pod \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.373183 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-combined-ca-bundle\") pod \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.381523 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d8da9bd-aba2-45b4-acc9-7fb085937e02" (UID: "4d8da9bd-aba2-45b4-acc9-7fb085937e02"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.381578 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "4d8da9bd-aba2-45b4-acc9-7fb085937e02" (UID: "4d8da9bd-aba2-45b4-acc9-7fb085937e02"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.381673 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d8da9bd-aba2-45b4-acc9-7fb085937e02-kube-api-access-k8zl2" (OuterVolumeSpecName: "kube-api-access-k8zl2") pod "4d8da9bd-aba2-45b4-acc9-7fb085937e02" (UID: "4d8da9bd-aba2-45b4-acc9-7fb085937e02"). InnerVolumeSpecName "kube-api-access-k8zl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.422792 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "4d8da9bd-aba2-45b4-acc9-7fb085937e02" (UID: "4d8da9bd-aba2-45b4-acc9-7fb085937e02"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.475255 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.475288 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8zl2\" (UniqueName: \"kubernetes.io/projected/4d8da9bd-aba2-45b4-acc9-7fb085937e02-kube-api-access-k8zl2\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.475299 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.475306 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.986981 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d8da9bd-aba2-45b4-acc9-7fb085937e02" path="/var/lib/kubelet/pods/4d8da9bd-aba2-45b4-acc9-7fb085937e02/volumes" Jan 30 21:39:02 crc kubenswrapper[4751]: I0130 21:39:02.203395 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"af93872a-62a1-407c-9932-2afb4313f457","Type":"ContainerStarted","Data":"739a2a72ce761a3f373f6256f8497584526f26f3772ad8fb56df46573f893e74"} Jan 30 21:39:02 crc kubenswrapper[4751]: I0130 21:39:02.204483 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 21:39:02 crc kubenswrapper[4751]: I0130 21:39:02.204564 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56","Type":"ContainerStarted","Data":"be4296f6b217dd2b689af14c4d2482756a70e0c8c65c70d369dec116324e23e7"} Jan 30 21:39:02 crc kubenswrapper[4751]: I0130 21:39:02.264516 4751 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="4d8da9bd-aba2-45b4-acc9-7fb085937e02" podUID="af93872a-62a1-407c-9932-2afb4313f457" Jan 30 21:39:02 crc kubenswrapper[4751]: I0130 21:39:02.678528 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="e741273e-caa0-4a2c-9ed0-6bae195052ce" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.218:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 21:39:03 crc kubenswrapper[4751]: I0130 21:39:03.216084 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56","Type":"ContainerStarted","Data":"f88b06701c46651952fd41f6764053d26180a5f35e312a74895bb2817e3380bc"} Jan 30 21:39:03 crc kubenswrapper[4751]: I0130 21:39:03.329548 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 21:39:03 crc kubenswrapper[4751]: I0130 21:39:03.361386 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.361362185 podStartE2EDuration="4.361362185s" podCreationTimestamp="2026-01-30 21:38:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:03.247251799 +0000 UTC m=+1481.993074448" watchObservedRunningTime="2026-01-30 21:39:03.361362185 +0000 UTC m=+1482.107184834" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.241182 4751 generic.go:334] "Generic (PLEG): container finished" podID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" containerID="744360fb184e7a689ab217ce4f6709b0ff7ab37b1bf6dc2a42f1d4e37e6d2d8f" exitCode=0 Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.242755 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5486cc9958-dvfn2" event={"ID":"5089359d-290c-4b07-80e4-0c4c73ffa8cd","Type":"ContainerDied","Data":"744360fb184e7a689ab217ce4f6709b0ff7ab37b1bf6dc2a42f1d4e37e6d2d8f"} Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.401443 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.558846 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.567247 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-internal-tls-certs\") pod \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.567388 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5089359d-290c-4b07-80e4-0c4c73ffa8cd-logs\") pod \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.567497 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-combined-ca-bundle\") pod \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.567562 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-public-tls-certs\") pod \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.567644 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-scripts\") pod \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.567749 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-config-data\") pod \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.567830 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhxpw\" (UniqueName: \"kubernetes.io/projected/5089359d-290c-4b07-80e4-0c4c73ffa8cd-kube-api-access-hhxpw\") pod \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.572024 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5089359d-290c-4b07-80e4-0c4c73ffa8cd-logs" (OuterVolumeSpecName: "logs") pod "5089359d-290c-4b07-80e4-0c4c73ffa8cd" (UID: "5089359d-290c-4b07-80e4-0c4c73ffa8cd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.594507 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-scripts" (OuterVolumeSpecName: "scripts") pod "5089359d-290c-4b07-80e4-0c4c73ffa8cd" (UID: "5089359d-290c-4b07-80e4-0c4c73ffa8cd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.594644 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5089359d-290c-4b07-80e4-0c4c73ffa8cd-kube-api-access-hhxpw" (OuterVolumeSpecName: "kube-api-access-hhxpw") pod "5089359d-290c-4b07-80e4-0c4c73ffa8cd" (UID: "5089359d-290c-4b07-80e4-0c4c73ffa8cd"). InnerVolumeSpecName "kube-api-access-hhxpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.668410 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-config-data" (OuterVolumeSpecName: "config-data") pod "5089359d-290c-4b07-80e4-0c4c73ffa8cd" (UID: "5089359d-290c-4b07-80e4-0c4c73ffa8cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.671378 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.671401 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.671412 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhxpw\" (UniqueName: \"kubernetes.io/projected/5089359d-290c-4b07-80e4-0c4c73ffa8cd-kube-api-access-hhxpw\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.671421 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5089359d-290c-4b07-80e4-0c4c73ffa8cd-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.708070 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5089359d-290c-4b07-80e4-0c4c73ffa8cd" (UID: "5089359d-290c-4b07-80e4-0c4c73ffa8cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.758581 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5089359d-290c-4b07-80e4-0c4c73ffa8cd" (UID: "5089359d-290c-4b07-80e4-0c4c73ffa8cd"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.773428 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.773459 4751 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.875602 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5089359d-290c-4b07-80e4-0c4c73ffa8cd" (UID: "5089359d-290c-4b07-80e4-0c4c73ffa8cd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:04.985181 4751 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:05.261046 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:05.261088 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5486cc9958-dvfn2" event={"ID":"5089359d-290c-4b07-80e4-0c4c73ffa8cd","Type":"ContainerDied","Data":"dd3c757afd0458b9ccac3e6359d964949a2b5e06b72b283eb20687517536ba8e"} Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:05.261124 4751 scope.go:117] "RemoveContainer" containerID="744360fb184e7a689ab217ce4f6709b0ff7ab37b1bf6dc2a42f1d4e37e6d2d8f" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:05.312121 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5486cc9958-dvfn2"] Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:05.313220 4751 scope.go:117] "RemoveContainer" containerID="5f448114a9e068f8f50004034c9e2ada11f4f45525f6ffe5d1ccd7a3167a8e23" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:05.325932 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5486cc9958-dvfn2"] Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:05.990062 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" path="/var/lib/kubelet/pods/5089359d-290c-4b07-80e4-0c4c73ffa8cd/volumes" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.358878 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6c448464db-8pmrl"] Jan 30 21:39:06 crc kubenswrapper[4751]: E0130 21:39:06.361396 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" containerName="placement-api" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.361422 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" containerName="placement-api" Jan 30 21:39:06 crc kubenswrapper[4751]: E0130 21:39:06.361448 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" containerName="placement-log" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 
21:39:06.361454 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" containerName="placement-log" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.361656 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" containerName="placement-log" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.361670 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" containerName="placement-api" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.362398 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.365895 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-q572p" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.365939 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.366059 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.387805 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6c448464db-8pmrl"] Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.519476 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psx9g\" (UniqueName: \"kubernetes.io/projected/7fea9c34-deff-4930-87b5-c697eb7831d8-kube-api-access-psx9g\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.519845 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.519968 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-combined-ca-bundle\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.520093 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data-custom\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.570178 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6b5fd5d955-5ksqz"] Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.571894 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.577593 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.601254 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-g645r"] Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.603444 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.621675 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data-custom\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.621757 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psx9g\" (UniqueName: \"kubernetes.io/projected/7fea9c34-deff-4930-87b5-c697eb7831d8-kube-api-access-psx9g\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.621793 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.621873 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-combined-ca-bundle\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.630728 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data-custom\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.649307 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.649848 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-combined-ca-bundle\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.669939 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psx9g\" (UniqueName: 
\"kubernetes.io/projected/7fea9c34-deff-4930-87b5-c697eb7831d8-kube-api-access-psx9g\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.685900 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-cf77776d-s5nbq"] Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.686896 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.687391 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.689650 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.720780 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6b5fd5d955-5ksqz"] Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.723306 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-combined-ca-bundle\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.723367 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.723423 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.723464 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-config\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.723492 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.723541 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6c27\" (UniqueName: \"kubernetes.io/projected/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-kube-api-access-j6c27\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 
21:39:06.723567 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9njj\" (UniqueName: \"kubernetes.io/projected/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-kube-api-access-m9njj\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.723587 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data-custom\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.723623 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.723685 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.737604 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-g645r"] Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.752663 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-cf77776d-s5nbq"] Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.825664 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6c27\" (UniqueName: \"kubernetes.io/projected/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-kube-api-access-j6c27\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.825708 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9njj\" (UniqueName: \"kubernetes.io/projected/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-kube-api-access-m9njj\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.825732 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data-custom\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.825770 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 
21:39:06.825836 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.825858 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwhz4\" (UniqueName: \"kubernetes.io/projected/7782d459-57bc-442e-a471-6c5839d6de47-kube-api-access-vwhz4\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.825877 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data-custom\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.825923 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-combined-ca-bundle\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.825950 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.825968 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.825998 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-combined-ca-bundle\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.826018 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.826087 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-config\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.826112 
4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.830799 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.831547 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.831835 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-config\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.835914 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.836160 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.839252 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.840990 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data-custom\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.847549 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-combined-ca-bundle\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.853252 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9njj\" (UniqueName: 
\"kubernetes.io/projected/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-kube-api-access-m9njj\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.853802 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6c27\" (UniqueName: \"kubernetes.io/projected/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-kube-api-access-j6c27\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.903078 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.932821 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.934317 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwhz4\" (UniqueName: \"kubernetes.io/projected/7782d459-57bc-442e-a471-6c5839d6de47-kube-api-access-vwhz4\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.934392 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data-custom\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.934448 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.934483 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-combined-ca-bundle\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.938270 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-combined-ca-bundle\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.943460 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data-custom\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.944768 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data\") pod \"heat-api-cf77776d-s5nbq\" (UID: 
\"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.959168 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwhz4\" (UniqueName: \"kubernetes.io/projected/7782d459-57bc-442e-a471-6c5839d6de47-kube-api-access-vwhz4\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:07 crc kubenswrapper[4751]: I0130 21:39:07.230234 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:07 crc kubenswrapper[4751]: I0130 21:39:07.392921 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6c448464db-8pmrl"] Jan 30 21:39:07 crc kubenswrapper[4751]: I0130 21:39:07.448835 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-586n4" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server" probeResult="failure" output=< Jan 30 21:39:07 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:39:07 crc kubenswrapper[4751]: > Jan 30 21:39:07 crc kubenswrapper[4751]: I0130 21:39:07.693765 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-g645r"] Jan 30 21:39:07 crc kubenswrapper[4751]: W0130 21:39:07.722281 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c918a5e_396e_4f0a_a68e_babcb03f2f4f.slice/crio-170e124c674cb6797f80484f6f460a674ca21330ea1b870e78baeac2120834e0 WatchSource:0}: Error finding container 170e124c674cb6797f80484f6f460a674ca21330ea1b870e78baeac2120834e0: Status 404 returned error can't find the container with id 170e124c674cb6797f80484f6f460a674ca21330ea1b870e78baeac2120834e0 Jan 30 21:39:07 crc kubenswrapper[4751]: W0130 21:39:07.807841 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8bf4d1e_d4c4_419c_b85b_5553a4996b75.slice/crio-bd07fda2d7a8e027afec3c510cd01ca77e8506c7870cbb605108fca39849bb7a WatchSource:0}: Error finding container bd07fda2d7a8e027afec3c510cd01ca77e8506c7870cbb605108fca39849bb7a: Status 404 returned error can't find the container with id bd07fda2d7a8e027afec3c510cd01ca77e8506c7870cbb605108fca39849bb7a Jan 30 21:39:07 crc kubenswrapper[4751]: I0130 21:39:07.817548 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6b5fd5d955-5ksqz"] Jan 30 21:39:07 crc kubenswrapper[4751]: I0130 21:39:07.954877 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-cf77776d-s5nbq"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.267282 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-hx7xn"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.268964 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hx7xn" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.284369 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hx7xn"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.341438 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6c448464db-8pmrl" event={"ID":"7fea9c34-deff-4930-87b5-c697eb7831d8","Type":"ContainerStarted","Data":"4b306988a2380cbe1d94c21ad11cef6733288cda2590ed761b989755ac079478"} Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.341478 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6c448464db-8pmrl" event={"ID":"7fea9c34-deff-4930-87b5-c697eb7831d8","Type":"ContainerStarted","Data":"c52a6ec70b4ea98c373beb2d768ebee9efb71cdd7d6badafe947f067a081150e"} Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.342690 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.367307 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cf77776d-s5nbq" event={"ID":"7782d459-57bc-442e-a471-6c5839d6de47","Type":"ContainerStarted","Data":"0cdf7ad51a70ca5bfb45057c9c67833605be258614b207239214e50e206ee1c5"} Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.372267 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6c448464db-8pmrl" podStartSLOduration=2.372235491 podStartE2EDuration="2.372235491s" podCreationTimestamp="2026-01-30 21:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:08.367612456 +0000 UTC m=+1487.113435105" watchObservedRunningTime="2026-01-30 21:39:08.372235491 +0000 UTC m=+1487.118058140" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.387909 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7srvr\" (UniqueName: \"kubernetes.io/projected/a169fb7b-bcf8-44d8-8942-a42a4de6001d-kube-api-access-7srvr\") pod \"nova-api-db-create-hx7xn\" (UID: \"a169fb7b-bcf8-44d8-8942-a42a4de6001d\") " pod="openstack/nova-api-db-create-hx7xn" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.387939 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" event={"ID":"b8bf4d1e-d4c4-419c-b85b-5553a4996b75","Type":"ContainerStarted","Data":"bd07fda2d7a8e027afec3c510cd01ca77e8506c7870cbb605108fca39849bb7a"} Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.388061 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a169fb7b-bcf8-44d8-8942-a42a4de6001d-operator-scripts\") pod \"nova-api-db-create-hx7xn\" (UID: \"a169fb7b-bcf8-44d8-8942-a42a4de6001d\") " pod="openstack/nova-api-db-create-hx7xn" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.398295 4751 generic.go:334] "Generic (PLEG): container finished" podID="5c918a5e-396e-4f0a-a68e-babcb03f2f4f" containerID="19625e5a680f498754e1957e0d693d69d11c0c30e1b3f7eadc11af86a948548e" exitCode=0 Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.398356 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" 
event={"ID":"5c918a5e-396e-4f0a-a68e-babcb03f2f4f","Type":"ContainerDied","Data":"19625e5a680f498754e1957e0d693d69d11c0c30e1b3f7eadc11af86a948548e"} Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.398390 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" event={"ID":"5c918a5e-396e-4f0a-a68e-babcb03f2f4f","Type":"ContainerStarted","Data":"170e124c674cb6797f80484f6f460a674ca21330ea1b870e78baeac2120834e0"} Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.399453 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2281-account-create-update-5l5m8"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.400930 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2281-account-create-update-5l5m8" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.404373 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.413312 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2281-account-create-update-5l5m8"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.489899 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwn4j\" (UniqueName: \"kubernetes.io/projected/a8312bae-69c5-4c31-844e-42a90c18bfd3-kube-api-access-rwn4j\") pod \"nova-api-2281-account-create-update-5l5m8\" (UID: \"a8312bae-69c5-4c31-844e-42a90c18bfd3\") " pod="openstack/nova-api-2281-account-create-update-5l5m8" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.490043 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8312bae-69c5-4c31-844e-42a90c18bfd3-operator-scripts\") pod \"nova-api-2281-account-create-update-5l5m8\" (UID: \"a8312bae-69c5-4c31-844e-42a90c18bfd3\") " pod="openstack/nova-api-2281-account-create-update-5l5m8" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.490079 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7srvr\" (UniqueName: \"kubernetes.io/projected/a169fb7b-bcf8-44d8-8942-a42a4de6001d-kube-api-access-7srvr\") pod \"nova-api-db-create-hx7xn\" (UID: \"a169fb7b-bcf8-44d8-8942-a42a4de6001d\") " pod="openstack/nova-api-db-create-hx7xn" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.490240 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a169fb7b-bcf8-44d8-8942-a42a4de6001d-operator-scripts\") pod \"nova-api-db-create-hx7xn\" (UID: \"a169fb7b-bcf8-44d8-8942-a42a4de6001d\") " pod="openstack/nova-api-db-create-hx7xn" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.491739 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a169fb7b-bcf8-44d8-8942-a42a4de6001d-operator-scripts\") pod \"nova-api-db-create-hx7xn\" (UID: \"a169fb7b-bcf8-44d8-8942-a42a4de6001d\") " pod="openstack/nova-api-db-create-hx7xn" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.500485 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6d52w"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.502109 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6d52w" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.521948 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7srvr\" (UniqueName: \"kubernetes.io/projected/a169fb7b-bcf8-44d8-8942-a42a4de6001d-kube-api-access-7srvr\") pod \"nova-api-db-create-hx7xn\" (UID: \"a169fb7b-bcf8-44d8-8942-a42a4de6001d\") " pod="openstack/nova-api-db-create-hx7xn" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.528284 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6d52w"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.588992 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-kj2ld"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.592380 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-operator-scripts\") pod \"nova-cell0-db-create-6d52w\" (UID: \"6f139e0b-3ae5-4d5c-aa87-f15d00373f98\") " pod="openstack/nova-cell0-db-create-6d52w" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.592463 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8312bae-69c5-4c31-844e-42a90c18bfd3-operator-scripts\") pod \"nova-api-2281-account-create-update-5l5m8\" (UID: \"a8312bae-69c5-4c31-844e-42a90c18bfd3\") " pod="openstack/nova-api-2281-account-create-update-5l5m8" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.592599 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwn4j\" (UniqueName: \"kubernetes.io/projected/a8312bae-69c5-4c31-844e-42a90c18bfd3-kube-api-access-rwn4j\") pod \"nova-api-2281-account-create-update-5l5m8\" (UID: \"a8312bae-69c5-4c31-844e-42a90c18bfd3\") " pod="openstack/nova-api-2281-account-create-update-5l5m8" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.592623 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slrpq\" (UniqueName: \"kubernetes.io/projected/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-kube-api-access-slrpq\") pod \"nova-cell0-db-create-6d52w\" (UID: \"6f139e0b-3ae5-4d5c-aa87-f15d00373f98\") " pod="openstack/nova-cell0-db-create-6d52w" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.592780 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kj2ld" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.594036 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8312bae-69c5-4c31-844e-42a90c18bfd3-operator-scripts\") pod \"nova-api-2281-account-create-update-5l5m8\" (UID: \"a8312bae-69c5-4c31-844e-42a90c18bfd3\") " pod="openstack/nova-api-2281-account-create-update-5l5m8" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.609727 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cdda-account-create-update-xfmk4"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.611395 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.614079 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwn4j\" (UniqueName: \"kubernetes.io/projected/a8312bae-69c5-4c31-844e-42a90c18bfd3-kube-api-access-rwn4j\") pod \"nova-api-2281-account-create-update-5l5m8\" (UID: \"a8312bae-69c5-4c31-844e-42a90c18bfd3\") " pod="openstack/nova-api-2281-account-create-update-5l5m8" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.614230 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.642153 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hx7xn" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.643299 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kj2ld"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.699170 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slrpq\" (UniqueName: \"kubernetes.io/projected/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-kube-api-access-slrpq\") pod \"nova-cell0-db-create-6d52w\" (UID: \"6f139e0b-3ae5-4d5c-aa87-f15d00373f98\") " pod="openstack/nova-cell0-db-create-6d52w" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.699329 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/444e34d6-7904-405b-956e-d23aed56537e-operator-scripts\") pod \"nova-cell1-db-create-kj2ld\" (UID: \"444e34d6-7904-405b-956e-d23aed56537e\") " pod="openstack/nova-cell1-db-create-kj2ld" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.699434 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-operator-scripts\") pod \"nova-cell0-db-create-6d52w\" (UID: \"6f139e0b-3ae5-4d5c-aa87-f15d00373f98\") " pod="openstack/nova-cell0-db-create-6d52w" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.699519 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szjrm\" (UniqueName: \"kubernetes.io/projected/f7625d34-2ace-4774-89e4-72729d19ce99-kube-api-access-szjrm\") pod \"nova-cell0-cdda-account-create-update-xfmk4\" (UID: \"f7625d34-2ace-4774-89e4-72729d19ce99\") " pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.699688 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdqx8\" (UniqueName: \"kubernetes.io/projected/444e34d6-7904-405b-956e-d23aed56537e-kube-api-access-cdqx8\") pod \"nova-cell1-db-create-kj2ld\" (UID: \"444e34d6-7904-405b-956e-d23aed56537e\") " pod="openstack/nova-cell1-db-create-kj2ld" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.699813 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7625d34-2ace-4774-89e4-72729d19ce99-operator-scripts\") pod \"nova-cell0-cdda-account-create-update-xfmk4\" (UID: \"f7625d34-2ace-4774-89e4-72729d19ce99\") " pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" Jan 30 21:39:08 crc 
kubenswrapper[4751]: I0130 21:39:08.706439 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-operator-scripts\") pod \"nova-cell0-db-create-6d52w\" (UID: \"6f139e0b-3ae5-4d5c-aa87-f15d00373f98\") " pod="openstack/nova-cell0-db-create-6d52w" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.712218 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cdda-account-create-update-xfmk4"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.718511 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slrpq\" (UniqueName: \"kubernetes.io/projected/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-kube-api-access-slrpq\") pod \"nova-cell0-db-create-6d52w\" (UID: \"6f139e0b-3ae5-4d5c-aa87-f15d00373f98\") " pod="openstack/nova-cell0-db-create-6d52w" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.726985 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2281-account-create-update-5l5m8" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.802404 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/444e34d6-7904-405b-956e-d23aed56537e-operator-scripts\") pod \"nova-cell1-db-create-kj2ld\" (UID: \"444e34d6-7904-405b-956e-d23aed56537e\") " pod="openstack/nova-cell1-db-create-kj2ld" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.802681 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szjrm\" (UniqueName: \"kubernetes.io/projected/f7625d34-2ace-4774-89e4-72729d19ce99-kube-api-access-szjrm\") pod \"nova-cell0-cdda-account-create-update-xfmk4\" (UID: \"f7625d34-2ace-4774-89e4-72729d19ce99\") " pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.803668 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/444e34d6-7904-405b-956e-d23aed56537e-operator-scripts\") pod \"nova-cell1-db-create-kj2ld\" (UID: \"444e34d6-7904-405b-956e-d23aed56537e\") " pod="openstack/nova-cell1-db-create-kj2ld" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.809647 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdqx8\" (UniqueName: \"kubernetes.io/projected/444e34d6-7904-405b-956e-d23aed56537e-kube-api-access-cdqx8\") pod \"nova-cell1-db-create-kj2ld\" (UID: \"444e34d6-7904-405b-956e-d23aed56537e\") " pod="openstack/nova-cell1-db-create-kj2ld" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.809900 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7625d34-2ace-4774-89e4-72729d19ce99-operator-scripts\") pod \"nova-cell0-cdda-account-create-update-xfmk4\" (UID: \"f7625d34-2ace-4774-89e4-72729d19ce99\") " pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.811118 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7625d34-2ace-4774-89e4-72729d19ce99-operator-scripts\") pod \"nova-cell0-cdda-account-create-update-xfmk4\" (UID: \"f7625d34-2ace-4774-89e4-72729d19ce99\") " 
pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.816800 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-6e74-account-create-update-gdfb4"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.821858 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szjrm\" (UniqueName: \"kubernetes.io/projected/f7625d34-2ace-4774-89e4-72729d19ce99-kube-api-access-szjrm\") pod \"nova-cell0-cdda-account-create-update-xfmk4\" (UID: \"f7625d34-2ace-4774-89e4-72729d19ce99\") " pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.823082 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.825175 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.841601 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6e74-account-create-update-gdfb4"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.842759 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6d52w" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.846287 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdqx8\" (UniqueName: \"kubernetes.io/projected/444e34d6-7904-405b-956e-d23aed56537e-kube-api-access-cdqx8\") pod \"nova-cell1-db-create-kj2ld\" (UID: \"444e34d6-7904-405b-956e-d23aed56537e\") " pod="openstack/nova-cell1-db-create-kj2ld" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.920046 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn5t6\" (UniqueName: \"kubernetes.io/projected/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-kube-api-access-wn5t6\") pod \"nova-cell1-6e74-account-create-update-gdfb4\" (UID: \"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5\") " pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.920808 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-operator-scripts\") pod \"nova-cell1-6e74-account-create-update-gdfb4\" (UID: \"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5\") " pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.973187 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kj2ld" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.976975 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.025813 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-operator-scripts\") pod \"nova-cell1-6e74-account-create-update-gdfb4\" (UID: \"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5\") " pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.029870 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-operator-scripts\") pod \"nova-cell1-6e74-account-create-update-gdfb4\" (UID: \"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5\") " pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.033570 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn5t6\" (UniqueName: \"kubernetes.io/projected/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-kube-api-access-wn5t6\") pod \"nova-cell1-6e74-account-create-update-gdfb4\" (UID: \"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5\") " pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.071062 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn5t6\" (UniqueName: \"kubernetes.io/projected/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-kube-api-access-wn5t6\") pod \"nova-cell1-6e74-account-create-update-gdfb4\" (UID: \"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5\") " pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.312994 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.320725 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hx7xn"] Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.414015 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-58dc6df599-nmmxw"] Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.418240 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.424028 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.424214 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.424283 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.452162 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hx7xn" event={"ID":"a169fb7b-bcf8-44d8-8942-a42a4de6001d","Type":"ContainerStarted","Data":"1ad9b6e61a78b2578133fbe99f7b252248de03c94cde5f1d03ed88c81b727ad8"} Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.455923 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-58dc6df599-nmmxw"] Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.458780 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" event={"ID":"5c918a5e-396e-4f0a-a68e-babcb03f2f4f","Type":"ContainerStarted","Data":"eb0f3e62504ef376cb231daedb59d05d53a0fc2c2b6b6606cc3a08b14b2931e8"} Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.458818 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.552861 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnwdb\" (UniqueName: \"kubernetes.io/projected/b9f02a32-18ed-4030-94d6-16f4d0feff52-kube-api-access-vnwdb\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.553012 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-config-data\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.553037 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9f02a32-18ed-4030-94d6-16f4d0feff52-run-httpd\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.553061 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b9f02a32-18ed-4030-94d6-16f4d0feff52-etc-swift\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.553076 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9f02a32-18ed-4030-94d6-16f4d0feff52-log-httpd\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" 
Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.553129 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-public-tls-certs\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.553178 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-internal-tls-certs\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.553218 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-combined-ca-bundle\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.654895 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnwdb\" (UniqueName: \"kubernetes.io/projected/b9f02a32-18ed-4030-94d6-16f4d0feff52-kube-api-access-vnwdb\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.655061 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-config-data\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.655098 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9f02a32-18ed-4030-94d6-16f4d0feff52-run-httpd\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.655136 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b9f02a32-18ed-4030-94d6-16f4d0feff52-etc-swift\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.655162 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9f02a32-18ed-4030-94d6-16f4d0feff52-log-httpd\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.655227 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-public-tls-certs\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 
21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.655277 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-internal-tls-certs\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.655350 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-combined-ca-bundle\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.665215 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-public-tls-certs\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.665523 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9f02a32-18ed-4030-94d6-16f4d0feff52-log-httpd\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.666689 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-combined-ca-bundle\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.667654 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9f02a32-18ed-4030-94d6-16f4d0feff52-run-httpd\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.679841 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b9f02a32-18ed-4030-94d6-16f4d0feff52-etc-swift\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.680622 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-config-data\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.687788 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-internal-tls-certs\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.694515 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-vnwdb\" (UniqueName: \"kubernetes.io/projected/b9f02a32-18ed-4030-94d6-16f4d0feff52-kube-api-access-vnwdb\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.712281 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" podStartSLOduration=3.71225873 podStartE2EDuration="3.71225873s" podCreationTimestamp="2026-01-30 21:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:09.505571995 +0000 UTC m=+1488.251394654" watchObservedRunningTime="2026-01-30 21:39:09.71225873 +0000 UTC m=+1488.458081389" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.742915 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2281-account-create-update-5l5m8"] Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.797346 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.964740 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6d52w"] Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.064814 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cdda-account-create-update-xfmk4"] Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.122291 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kj2ld"] Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.235056 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6e74-account-create-update-gdfb4"] Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.315058 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.498100 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" event={"ID":"f7625d34-2ace-4774-89e4-72729d19ce99","Type":"ContainerStarted","Data":"ea783e09d225076eb9b5cb1322e598d579f1e2d1bcde0d5c5d107891850e5bd8"} Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.501784 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2281-account-create-update-5l5m8" event={"ID":"a8312bae-69c5-4c31-844e-42a90c18bfd3","Type":"ContainerStarted","Data":"95a526791a15478d3cd5022079224ebd6df133da08d333aae07b4e691a9b11fa"} Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.501837 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2281-account-create-update-5l5m8" event={"ID":"a8312bae-69c5-4c31-844e-42a90c18bfd3","Type":"ContainerStarted","Data":"19c59b8a8fccb214b3cbd6a763ad4108fc2b451ba8149317366be3741657e0ba"} Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.507742 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kj2ld" event={"ID":"444e34d6-7904-405b-956e-d23aed56537e","Type":"ContainerStarted","Data":"fa79502d2e5e1b12d71a3aa518d74951310c1854db953285f0dd0bec57e202e2"} Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.514097 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hx7xn" 
event={"ID":"a169fb7b-bcf8-44d8-8942-a42a4de6001d","Type":"ContainerStarted","Data":"2c32df1a18d1df9fb91c33d8041010429300b75cb742162ff675699f4b703b35"} Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.515707 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" event={"ID":"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5","Type":"ContainerStarted","Data":"ee9d3fa4ee3aa958761c47e4d6945036a02c56588cf4b2622a33096fc40d3c2f"} Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.520185 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6d52w" event={"ID":"6f139e0b-3ae5-4d5c-aa87-f15d00373f98","Type":"ContainerStarted","Data":"1412d3e968a704f0b25e82ec780f504270d2155c0b3632b09d545841f20c56f1"} Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.549466 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-hx7xn" podStartSLOduration=2.549443602 podStartE2EDuration="2.549443602s" podCreationTimestamp="2026-01-30 21:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:10.536079494 +0000 UTC m=+1489.281902143" watchObservedRunningTime="2026-01-30 21:39:10.549443602 +0000 UTC m=+1489.295266241" Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.670636 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-58dc6df599-nmmxw"] Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.531466 4751 generic.go:334] "Generic (PLEG): container finished" podID="bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5" containerID="801374aeb4ac1cff7c0c384bd6f348009c3a008674d2c7a597e16dd316c97dcd" exitCode=0 Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.531648 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" event={"ID":"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5","Type":"ContainerDied","Data":"801374aeb4ac1cff7c0c384bd6f348009c3a008674d2c7a597e16dd316c97dcd"} Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.534167 4751 generic.go:334] "Generic (PLEG): container finished" podID="f7625d34-2ace-4774-89e4-72729d19ce99" containerID="819319f0811868394aa97eff76f3853ec44f21bc4e3fff54753bf1a73c6cb040" exitCode=0 Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.534222 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" event={"ID":"f7625d34-2ace-4774-89e4-72729d19ce99","Type":"ContainerDied","Data":"819319f0811868394aa97eff76f3853ec44f21bc4e3fff54753bf1a73c6cb040"} Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.536058 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58dc6df599-nmmxw" event={"ID":"b9f02a32-18ed-4030-94d6-16f4d0feff52","Type":"ContainerStarted","Data":"7cb7f976b28f6f8b7ee1f2e7f06487f11fa8eada6b3bdc7962a7361b72b14548"} Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.536088 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58dc6df599-nmmxw" event={"ID":"b9f02a32-18ed-4030-94d6-16f4d0feff52","Type":"ContainerStarted","Data":"ea7a89d696450ca8fb338e8cd72613862a8a03b6f7bb703ee3898333c4a58ed1"} Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.537939 4751 generic.go:334] "Generic (PLEG): container finished" podID="a169fb7b-bcf8-44d8-8942-a42a4de6001d" 
containerID="2c32df1a18d1df9fb91c33d8041010429300b75cb742162ff675699f4b703b35" exitCode=0 Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.537985 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hx7xn" event={"ID":"a169fb7b-bcf8-44d8-8942-a42a4de6001d","Type":"ContainerDied","Data":"2c32df1a18d1df9fb91c33d8041010429300b75cb742162ff675699f4b703b35"} Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.540934 4751 generic.go:334] "Generic (PLEG): container finished" podID="6f139e0b-3ae5-4d5c-aa87-f15d00373f98" containerID="653c6822da8dfb62b9974deaabbf6807b6ceb59b253232e41aa972ac9d77b452" exitCode=0 Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.540990 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6d52w" event={"ID":"6f139e0b-3ae5-4d5c-aa87-f15d00373f98","Type":"ContainerDied","Data":"653c6822da8dfb62b9974deaabbf6807b6ceb59b253232e41aa972ac9d77b452"} Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.545486 4751 generic.go:334] "Generic (PLEG): container finished" podID="a8312bae-69c5-4c31-844e-42a90c18bfd3" containerID="95a526791a15478d3cd5022079224ebd6df133da08d333aae07b4e691a9b11fa" exitCode=0 Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.545584 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2281-account-create-update-5l5m8" event={"ID":"a8312bae-69c5-4c31-844e-42a90c18bfd3","Type":"ContainerDied","Data":"95a526791a15478d3cd5022079224ebd6df133da08d333aae07b4e691a9b11fa"} Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.551163 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kj2ld" event={"ID":"444e34d6-7904-405b-956e-d23aed56537e","Type":"ContainerDied","Data":"1b2d27c5fa8a33163c2a6acc216d5d997d31face25a7d5b27edce913d857e2cf"} Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.552918 4751 generic.go:334] "Generic (PLEG): container finished" podID="444e34d6-7904-405b-956e-d23aed56537e" containerID="1b2d27c5fa8a33163c2a6acc216d5d997d31face25a7d5b27edce913d857e2cf" exitCode=0 Jan 30 21:39:12 crc kubenswrapper[4751]: I0130 21:39:12.564141 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:39:12 crc kubenswrapper[4751]: I0130 21:39:12.564822 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="ceilometer-central-agent" containerID="cri-o://57c96884c7e80a6d477536792bb89b73f1542edc832a38be9d3a266693f347ec" gracePeriod=30 Jan 30 21:39:12 crc kubenswrapper[4751]: I0130 21:39:12.565553 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="ceilometer-notification-agent" containerID="cri-o://645f73160e3f56e7ed531a836f7c9a1561da7bc5259b4db0442d52545c4d2302" gracePeriod=30 Jan 30 21:39:12 crc kubenswrapper[4751]: I0130 21:39:12.565543 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="proxy-httpd" containerID="cri-o://e9063569ccc33e77085e2b00951a57c6def385f3008e41cdbaf9636a0f5b353f" gracePeriod=30 Jan 30 21:39:12 crc kubenswrapper[4751]: I0130 21:39:12.565584 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" 
containerName="sg-core" containerID="cri-o://b5aac7d6f497e2328bb417b25d63cf92f6851dadf3db5e57dc476e250917965b" gracePeriod=30 Jan 30 21:39:12 crc kubenswrapper[4751]: I0130 21:39:12.582805 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.424888 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.510957 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.541237 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szjrm\" (UniqueName: \"kubernetes.io/projected/f7625d34-2ace-4774-89e4-72729d19ce99-kube-api-access-szjrm\") pod \"f7625d34-2ace-4774-89e4-72729d19ce99\" (UID: \"f7625d34-2ace-4774-89e4-72729d19ce99\") " Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.541411 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7625d34-2ace-4774-89e4-72729d19ce99-operator-scripts\") pod \"f7625d34-2ace-4774-89e4-72729d19ce99\" (UID: \"f7625d34-2ace-4774-89e4-72729d19ce99\") " Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.542529 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7625d34-2ace-4774-89e4-72729d19ce99-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f7625d34-2ace-4774-89e4-72729d19ce99" (UID: "f7625d34-2ace-4774-89e4-72729d19ce99"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.554589 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7625d34-2ace-4774-89e4-72729d19ce99-kube-api-access-szjrm" (OuterVolumeSpecName: "kube-api-access-szjrm") pod "f7625d34-2ace-4774-89e4-72729d19ce99" (UID: "f7625d34-2ace-4774-89e4-72729d19ce99"). InnerVolumeSpecName "kube-api-access-szjrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.611692 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-566dccff6-ddvxf"] Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.611921 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-566dccff6-ddvxf" podUID="11052d78-74b6-472a-aaba-513368f51ce3" containerName="neutron-api" containerID="cri-o://2ba96d5744b69d3f9276be5b0e9715862e0020d80034d236fcecc5d9420b54cc" gracePeriod=30 Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.612047 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-566dccff6-ddvxf" podUID="11052d78-74b6-472a-aaba-513368f51ce3" containerName="neutron-httpd" containerID="cri-o://74aefce86656a68e812b38f7658b2359076b62078ffd0f3974807d58363f94b0" gracePeriod=30 Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.633631 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" event={"ID":"f7625d34-2ace-4774-89e4-72729d19ce99","Type":"ContainerDied","Data":"ea783e09d225076eb9b5cb1322e598d579f1e2d1bcde0d5c5d107891850e5bd8"} Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.633674 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea783e09d225076eb9b5cb1322e598d579f1e2d1bcde0d5c5d107891850e5bd8" Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.633743 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.644576 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7625d34-2ace-4774-89e4-72729d19ce99-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.644605 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szjrm\" (UniqueName: \"kubernetes.io/projected/f7625d34-2ace-4774-89e4-72729d19ce99-kube-api-access-szjrm\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.694190 4751 generic.go:334] "Generic (PLEG): container finished" podID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerID="e9063569ccc33e77085e2b00951a57c6def385f3008e41cdbaf9636a0f5b353f" exitCode=0 Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.696555 4751 generic.go:334] "Generic (PLEG): container finished" podID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerID="b5aac7d6f497e2328bb417b25d63cf92f6851dadf3db5e57dc476e250917965b" exitCode=2 Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.696564 4751 generic.go:334] "Generic (PLEG): container finished" podID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerID="57c96884c7e80a6d477536792bb89b73f1542edc832a38be9d3a266693f347ec" exitCode=0 Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.696585 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca","Type":"ContainerDied","Data":"e9063569ccc33e77085e2b00951a57c6def385f3008e41cdbaf9636a0f5b353f"} Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.696619 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca","Type":"ContainerDied","Data":"b5aac7d6f497e2328bb417b25d63cf92f6851dadf3db5e57dc476e250917965b"} Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.696629 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca","Type":"ContainerDied","Data":"57c96884c7e80a6d477536792bb89b73f1542edc832a38be9d3a266693f347ec"} Jan 30 21:39:14 crc kubenswrapper[4751]: I0130 21:39:14.714574 4751 generic.go:334] "Generic (PLEG): container finished" podID="11052d78-74b6-472a-aaba-513368f51ce3" containerID="74aefce86656a68e812b38f7658b2359076b62078ffd0f3974807d58363f94b0" exitCode=0 Jan 30 21:39:14 crc kubenswrapper[4751]: I0130 21:39:14.714664 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-566dccff6-ddvxf" event={"ID":"11052d78-74b6-472a-aaba-513368f51ce3","Type":"ContainerDied","Data":"74aefce86656a68e812b38f7658b2359076b62078ffd0f3974807d58363f94b0"} Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.155762 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-d9fcd4c7f-gcp2z"] Jan 30 21:39:15 crc kubenswrapper[4751]: E0130 21:39:15.156219 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7625d34-2ace-4774-89e4-72729d19ce99" containerName="mariadb-account-create-update" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.156235 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7625d34-2ace-4774-89e4-72729d19ce99" containerName="mariadb-account-create-update" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.156499 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7625d34-2ace-4774-89e4-72729d19ce99" containerName="mariadb-account-create-update" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.157193 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.199598 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6ffb596769-rgv47"] Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.201088 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.228936 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-d9fcd4c7f-gcp2z"] Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.250641 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-84f9b8dd8f-qtmlz"] Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.252179 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.284795 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m6kn\" (UniqueName: \"kubernetes.io/projected/191c5874-d3f0-4a2b-adcf-8ceed228e459-kube-api-access-5m6kn\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.284893 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data-custom\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.284913 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-combined-ca-bundle\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.284938 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.292501 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6ffb596769-rgv47"] Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.303412 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-84f9b8dd8f-qtmlz"] Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.386486 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data-custom\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.386539 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.386636 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq89d\" (UniqueName: \"kubernetes.io/projected/826635d2-0549-4d63-84e2-3ba3cdf85db4-kube-api-access-zq89d\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.386676 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzjlk\" (UniqueName: \"kubernetes.io/projected/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-kube-api-access-vzjlk\") pod 
\"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.386711 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.386793 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-combined-ca-bundle\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.386860 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m6kn\" (UniqueName: \"kubernetes.io/projected/191c5874-d3f0-4a2b-adcf-8ceed228e459-kube-api-access-5m6kn\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.386891 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data-custom\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.386925 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-combined-ca-bundle\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.386991 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data-custom\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.387008 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-combined-ca-bundle\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.387026 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.413290 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data-custom\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.413656 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.416615 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-combined-ca-bundle\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.418511 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m6kn\" (UniqueName: \"kubernetes.io/projected/191c5874-d3f0-4a2b-adcf-8ceed228e459-kube-api-access-5m6kn\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.489898 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq89d\" (UniqueName: \"kubernetes.io/projected/826635d2-0549-4d63-84e2-3ba3cdf85db4-kube-api-access-zq89d\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.489988 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzjlk\" (UniqueName: \"kubernetes.io/projected/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-kube-api-access-vzjlk\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.490036 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.490071 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-combined-ca-bundle\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.490134 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data-custom\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.490167 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-combined-ca-bundle\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.490235 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data-custom\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.490262 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.498977 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.503362 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data-custom\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.506858 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data-custom\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.507101 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-combined-ca-bundle\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.508549 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-combined-ca-bundle\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.514764 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.526504 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.526756 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzjlk\" (UniqueName: \"kubernetes.io/projected/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-kube-api-access-vzjlk\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.530127 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq89d\" (UniqueName: \"kubernetes.io/projected/826635d2-0549-4d63-84e2-3ba3cdf85db4-kube-api-access-zq89d\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.534116 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.578521 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:16 crc kubenswrapper[4751]: I0130 21:39:16.495087 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.217:3000/\": dial tcp 10.217.0.217:3000: connect: connection refused" Jan 30 21:39:16 crc kubenswrapper[4751]: I0130 21:39:16.759418 4751 generic.go:334] "Generic (PLEG): container finished" podID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerID="645f73160e3f56e7ed531a836f7c9a1561da7bc5259b4db0442d52545c4d2302" exitCode=0 Jan 30 21:39:16 crc kubenswrapper[4751]: I0130 21:39:16.759452 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca","Type":"ContainerDied","Data":"645f73160e3f56e7ed531a836f7c9a1561da7bc5259b4db0442d52545c4d2302"} Jan 30 21:39:16 crc kubenswrapper[4751]: I0130 21:39:16.935534 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.028761 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-hb44m"] Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.028975 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" podUID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" containerName="dnsmasq-dns" containerID="cri-o://d9b252a19e1756dc14b8604eb4ec0d16757d20c0506507f763599f15997045f8" gracePeriod=10 Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.231668 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" podUID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.214:5353: connect: connection refused" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.422084 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-586n4" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server" probeResult="failure" output=< Jan 30 21:39:17 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:39:17 crc 
kubenswrapper[4751]: > Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.749754 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-cf77776d-s5nbq"] Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.760533 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6b5fd5d955-5ksqz"] Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.779417 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-d6c877d68-9ktwv"] Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.781038 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.783962 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.784114 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.788931 4751 generic.go:334] "Generic (PLEG): container finished" podID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" containerID="d9b252a19e1756dc14b8604eb4ec0d16757d20c0506507f763599f15997045f8" exitCode=0 Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.788976 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" event={"ID":"5a85ff98-c5c9-4735-ad9d-3c987976bd2f","Type":"ContainerDied","Data":"d9b252a19e1756dc14b8604eb4ec0d16757d20c0506507f763599f15997045f8"} Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.803733 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-d6c877d68-9ktwv"] Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.814750 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6f4bd4b69-ntk8n"] Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.819531 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.822194 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.822696 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.843030 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6f4bd4b69-ntk8n"] Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.844903 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-internal-tls-certs\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.844977 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-combined-ca-bundle\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.845094 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.845190 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data-custom\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.845229 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjzpc\" (UniqueName: \"kubernetes.io/projected/8a808a38-f939-4b4f-8386-e177712737d6-kube-api-access-wjzpc\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.845248 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-public-tls-certs\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.947761 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-internal-tls-certs\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.947810 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.947858 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-combined-ca-bundle\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.947921 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-public-tls-certs\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.948098 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.948192 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-internal-tls-certs\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.948278 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data-custom\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.948376 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjzpc\" (UniqueName: \"kubernetes.io/projected/8a808a38-f939-4b4f-8386-e177712737d6-kube-api-access-wjzpc\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.948461 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-public-tls-certs\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.951399 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7dxt\" (UniqueName: \"kubernetes.io/projected/43d36aef-fb14-4701-8931-9aaa96d049a9-kube-api-access-n7dxt\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.951449 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data-custom\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.951495 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-combined-ca-bundle\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.953745 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-internal-tls-certs\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.953795 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-public-tls-certs\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.954235 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data-custom\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.970503 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.971091 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-combined-ca-bundle\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.972760 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjzpc\" (UniqueName: \"kubernetes.io/projected/8a808a38-f939-4b4f-8386-e177712737d6-kube-api-access-wjzpc\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.065233 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7dxt\" (UniqueName: \"kubernetes.io/projected/43d36aef-fb14-4701-8931-9aaa96d049a9-kube-api-access-n7dxt\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.065288 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data-custom\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.065313 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-combined-ca-bundle\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.065394 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.065539 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-public-tls-certs\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.065622 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-internal-tls-certs\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.069553 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data-custom\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.069761 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.069883 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-internal-tls-certs\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.072542 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-public-tls-certs\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.076033 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-combined-ca-bundle\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.084902 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7dxt\" (UniqueName: \"kubernetes.io/projected/43d36aef-fb14-4701-8931-9aaa96d049a9-kube-api-access-n7dxt\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.114079 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.138016 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:19 crc kubenswrapper[4751]: I0130 21:39:19.813179 4751 generic.go:334] "Generic (PLEG): container finished" podID="11052d78-74b6-472a-aaba-513368f51ce3" containerID="2ba96d5744b69d3f9276be5b0e9715862e0020d80034d236fcecc5d9420b54cc" exitCode=0 Jan 30 21:39:19 crc kubenswrapper[4751]: I0130 21:39:19.813244 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-566dccff6-ddvxf" event={"ID":"11052d78-74b6-472a-aaba-513368f51ce3","Type":"ContainerDied","Data":"2ba96d5744b69d3f9276be5b0e9715862e0020d80034d236fcecc5d9420b54cc"} Jan 30 21:39:21 crc kubenswrapper[4751]: I0130 21:39:21.766714 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6d52w" Jan 30 21:39:21 crc kubenswrapper[4751]: E0130 21:39:21.773437 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 30 21:39:21 crc kubenswrapper[4751]: E0130 21:39:21.773895 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n695h699h687h695h689h5f7hfch66chcfh88h5dbh697h54bh545h5c7h697h587hc5h549hb7h88h55bh648h56fh67chb8hb7h54dh64ch667hdh9bq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-46mvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(af93872a-62a1-407c-9932-2afb4313f457): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:39:21 crc kubenswrapper[4751]: E0130 21:39:21.775013 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="af93872a-62a1-407c-9932-2afb4313f457" Jan 30 21:39:21 crc kubenswrapper[4751]: I0130 21:39:21.851783 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6d52w" Jan 30 21:39:21 crc kubenswrapper[4751]: I0130 21:39:21.852341 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6d52w" event={"ID":"6f139e0b-3ae5-4d5c-aa87-f15d00373f98","Type":"ContainerDied","Data":"1412d3e968a704f0b25e82ec780f504270d2155c0b3632b09d545841f20c56f1"} Jan 30 21:39:21 crc kubenswrapper[4751]: I0130 21:39:21.852412 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1412d3e968a704f0b25e82ec780f504270d2155c0b3632b09d545841f20c56f1" Jan 30 21:39:21 crc kubenswrapper[4751]: E0130 21:39:21.859002 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="af93872a-62a1-407c-9932-2afb4313f457" Jan 30 21:39:21 crc kubenswrapper[4751]: I0130 21:39:21.876048 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-operator-scripts\") pod \"6f139e0b-3ae5-4d5c-aa87-f15d00373f98\" (UID: \"6f139e0b-3ae5-4d5c-aa87-f15d00373f98\") " Jan 30 21:39:21 crc kubenswrapper[4751]: I0130 21:39:21.876279 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slrpq\" (UniqueName: \"kubernetes.io/projected/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-kube-api-access-slrpq\") pod \"6f139e0b-3ae5-4d5c-aa87-f15d00373f98\" (UID: \"6f139e0b-3ae5-4d5c-aa87-f15d00373f98\") " Jan 30 21:39:21 crc kubenswrapper[4751]: I0130 21:39:21.876549 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6f139e0b-3ae5-4d5c-aa87-f15d00373f98" (UID: "6f139e0b-3ae5-4d5c-aa87-f15d00373f98"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:21 crc kubenswrapper[4751]: I0130 21:39:21.876933 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:21 crc kubenswrapper[4751]: I0130 21:39:21.886461 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-kube-api-access-slrpq" (OuterVolumeSpecName: "kube-api-access-slrpq") pod "6f139e0b-3ae5-4d5c-aa87-f15d00373f98" (UID: "6f139e0b-3ae5-4d5c-aa87-f15d00373f98"). InnerVolumeSpecName "kube-api-access-slrpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:21 crc kubenswrapper[4751]: I0130 21:39:21.984724 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slrpq\" (UniqueName: \"kubernetes.io/projected/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-kube-api-access-slrpq\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.085061 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.163759 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-kj2ld" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.189951 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hx7xn" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.190585 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-operator-scripts\") pod \"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5\" (UID: \"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.190679 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn5t6\" (UniqueName: \"kubernetes.io/projected/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-kube-api-access-wn5t6\") pod \"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5\" (UID: \"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.191841 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5" (UID: "bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.192862 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.209482 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-kube-api-access-wn5t6" (OuterVolumeSpecName: "kube-api-access-wn5t6") pod "bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5" (UID: "bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5"). InnerVolumeSpecName "kube-api-access-wn5t6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.233373 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" podUID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.214:5353: connect: connection refused" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.233957 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2281-account-create-update-5l5m8" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.294434 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/444e34d6-7904-405b-956e-d23aed56537e-operator-scripts\") pod \"444e34d6-7904-405b-956e-d23aed56537e\" (UID: \"444e34d6-7904-405b-956e-d23aed56537e\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.294972 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a169fb7b-bcf8-44d8-8942-a42a4de6001d-operator-scripts\") pod \"a169fb7b-bcf8-44d8-8942-a42a4de6001d\" (UID: \"a169fb7b-bcf8-44d8-8942-a42a4de6001d\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.295006 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8312bae-69c5-4c31-844e-42a90c18bfd3-operator-scripts\") pod \"a8312bae-69c5-4c31-844e-42a90c18bfd3\" (UID: \"a8312bae-69c5-4c31-844e-42a90c18bfd3\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.295051 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwn4j\" (UniqueName: \"kubernetes.io/projected/a8312bae-69c5-4c31-844e-42a90c18bfd3-kube-api-access-rwn4j\") pod \"a8312bae-69c5-4c31-844e-42a90c18bfd3\" (UID: \"a8312bae-69c5-4c31-844e-42a90c18bfd3\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.295076 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdqx8\" (UniqueName: \"kubernetes.io/projected/444e34d6-7904-405b-956e-d23aed56537e-kube-api-access-cdqx8\") pod \"444e34d6-7904-405b-956e-d23aed56537e\" (UID: \"444e34d6-7904-405b-956e-d23aed56537e\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.295116 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7srvr\" (UniqueName: \"kubernetes.io/projected/a169fb7b-bcf8-44d8-8942-a42a4de6001d-kube-api-access-7srvr\") pod \"a169fb7b-bcf8-44d8-8942-a42a4de6001d\" (UID: \"a169fb7b-bcf8-44d8-8942-a42a4de6001d\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.295645 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn5t6\" (UniqueName: \"kubernetes.io/projected/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-kube-api-access-wn5t6\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.295678 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8312bae-69c5-4c31-844e-42a90c18bfd3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a8312bae-69c5-4c31-844e-42a90c18bfd3" (UID: "a8312bae-69c5-4c31-844e-42a90c18bfd3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.296017 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/444e34d6-7904-405b-956e-d23aed56537e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "444e34d6-7904-405b-956e-d23aed56537e" (UID: "444e34d6-7904-405b-956e-d23aed56537e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.296086 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a169fb7b-bcf8-44d8-8942-a42a4de6001d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a169fb7b-bcf8-44d8-8942-a42a4de6001d" (UID: "a169fb7b-bcf8-44d8-8942-a42a4de6001d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.308469 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a169fb7b-bcf8-44d8-8942-a42a4de6001d-kube-api-access-7srvr" (OuterVolumeSpecName: "kube-api-access-7srvr") pod "a169fb7b-bcf8-44d8-8942-a42a4de6001d" (UID: "a169fb7b-bcf8-44d8-8942-a42a4de6001d"). InnerVolumeSpecName "kube-api-access-7srvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.311130 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/444e34d6-7904-405b-956e-d23aed56537e-kube-api-access-cdqx8" (OuterVolumeSpecName: "kube-api-access-cdqx8") pod "444e34d6-7904-405b-956e-d23aed56537e" (UID: "444e34d6-7904-405b-956e-d23aed56537e"). InnerVolumeSpecName "kube-api-access-cdqx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.329677 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8312bae-69c5-4c31-844e-42a90c18bfd3-kube-api-access-rwn4j" (OuterVolumeSpecName: "kube-api-access-rwn4j") pod "a8312bae-69c5-4c31-844e-42a90c18bfd3" (UID: "a8312bae-69c5-4c31-844e-42a90c18bfd3"). InnerVolumeSpecName "kube-api-access-rwn4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.397316 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/444e34d6-7904-405b-956e-d23aed56537e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.397371 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a169fb7b-bcf8-44d8-8942-a42a4de6001d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.397382 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8312bae-69c5-4c31-844e-42a90c18bfd3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.397392 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwn4j\" (UniqueName: \"kubernetes.io/projected/a8312bae-69c5-4c31-844e-42a90c18bfd3-kube-api-access-rwn4j\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.397404 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdqx8\" (UniqueName: \"kubernetes.io/projected/444e34d6-7904-405b-956e-d23aed56537e-kube-api-access-cdqx8\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.397412 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7srvr\" (UniqueName: \"kubernetes.io/projected/a169fb7b-bcf8-44d8-8942-a42a4de6001d-kube-api-access-7srvr\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.531182 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.605862 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-sg-core-conf-yaml\") pod \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.605925 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-log-httpd\") pod \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.605984 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-config-data\") pod \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.606082 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-run-httpd\") pod \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.606115 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-scripts\") pod \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.606162 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtcg6\" (UniqueName: \"kubernetes.io/projected/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-kube-api-access-xtcg6\") pod \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.606201 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-combined-ca-bundle\") pod \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.613044 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" (UID: "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.613341 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" (UID: "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.616689 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-kube-api-access-xtcg6" (OuterVolumeSpecName: "kube-api-access-xtcg6") pod "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" (UID: "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca"). InnerVolumeSpecName "kube-api-access-xtcg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.625542 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-scripts" (OuterVolumeSpecName: "scripts") pod "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" (UID: "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.647937 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" (UID: "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.716270 4751 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.716302 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.716310 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtcg6\" (UniqueName: \"kubernetes.io/projected/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-kube-api-access-xtcg6\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.716337 4751 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.716344 4751 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.979868 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca","Type":"ContainerDied","Data":"566588f45edf254c38a8bd2cb4cecfcf41053da3f96016aee3abcebf59acf4a0"} Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.980177 4751 scope.go:117] "RemoveContainer" containerID="e9063569ccc33e77085e2b00951a57c6def385f3008e41cdbaf9636a0f5b353f" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.979948 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.986565 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" event={"ID":"b8bf4d1e-d4c4-419c-b85b-5553a4996b75","Type":"ContainerStarted","Data":"2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d"} Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.986780 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" podUID="b8bf4d1e-d4c4-419c-b85b-5553a4996b75" containerName="heat-cfnapi" containerID="cri-o://2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d" gracePeriod=60 Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.986771 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.990680 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" event={"ID":"5a85ff98-c5c9-4735-ad9d-3c987976bd2f","Type":"ContainerDied","Data":"d4ad9ad89c73105ea7a484e1db33eb7c6d8564b633625c6640e82ad596737a10"} Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.990718 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4ad9ad89c73105ea7a484e1db33eb7c6d8564b633625c6640e82ad596737a10" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.992156 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2281-account-create-update-5l5m8" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.992438 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2281-account-create-update-5l5m8" event={"ID":"a8312bae-69c5-4c31-844e-42a90c18bfd3","Type":"ContainerDied","Data":"19c59b8a8fccb214b3cbd6a763ad4108fc2b451ba8149317366be3741657e0ba"} Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.992492 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19c59b8a8fccb214b3cbd6a763ad4108fc2b451ba8149317366be3741657e0ba" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.011960 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kj2ld" event={"ID":"444e34d6-7904-405b-956e-d23aed56537e","Type":"ContainerDied","Data":"fa79502d2e5e1b12d71a3aa518d74951310c1854db953285f0dd0bec57e202e2"} Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.011997 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa79502d2e5e1b12d71a3aa518d74951310c1854db953285f0dd0bec57e202e2" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.012064 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-kj2ld" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.022463 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" podStartSLOduration=2.966384493 podStartE2EDuration="17.022444856s" podCreationTimestamp="2026-01-30 21:39:06 +0000 UTC" firstStartedPulling="2026-01-30 21:39:07.821164371 +0000 UTC m=+1486.566987020" lastFinishedPulling="2026-01-30 21:39:21.877224734 +0000 UTC m=+1500.623047383" observedRunningTime="2026-01-30 21:39:23.005647606 +0000 UTC m=+1501.751470255" watchObservedRunningTime="2026-01-30 21:39:23.022444856 +0000 UTC m=+1501.768267505" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.027458 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hx7xn" event={"ID":"a169fb7b-bcf8-44d8-8942-a42a4de6001d","Type":"ContainerDied","Data":"1ad9b6e61a78b2578133fbe99f7b252248de03c94cde5f1d03ed88c81b727ad8"} Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.027499 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ad9b6e61a78b2578133fbe99f7b252248de03c94cde5f1d03ed88c81b727ad8" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.027551 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hx7xn" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.041406 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58dc6df599-nmmxw" event={"ID":"b9f02a32-18ed-4030-94d6-16f4d0feff52","Type":"ContainerStarted","Data":"51aee9e12458cb0b79d279b34f670c75a19df0eec0106e44ca9f07968777899b"} Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.042130 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.042215 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.050986 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" event={"ID":"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5","Type":"ContainerDied","Data":"ee9d3fa4ee3aa958761c47e4d6945036a02c56588cf4b2622a33096fc40d3c2f"} Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.051038 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee9d3fa4ee3aa958761c47e4d6945036a02c56588cf4b2622a33096fc40d3c2f" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.051185 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.058313 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-58dc6df599-nmmxw" podUID="b9f02a32-18ed-4030-94d6-16f4d0feff52" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.069977 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-58dc6df599-nmmxw" podStartSLOduration=14.069961399 podStartE2EDuration="14.069961399s" podCreationTimestamp="2026-01-30 21:39:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:23.068591812 +0000 UTC m=+1501.814414531" watchObservedRunningTime="2026-01-30 21:39:23.069961399 +0000 UTC m=+1501.815784038" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.138543 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-config-data" (OuterVolumeSpecName: "config-data") pod "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" (UID: "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.143473 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" (UID: "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.149925 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.149955 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.263985 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.291029 4751 scope.go:117] "RemoveContainer" containerID="b5aac7d6f497e2328bb417b25d63cf92f6851dadf3db5e57dc476e250917965b" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.307858 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.341015 4751 scope.go:117] "RemoveContainer" containerID="645f73160e3f56e7ed531a836f7c9a1561da7bc5259b4db0442d52545c4d2302" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.354575 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-nb\") pod \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.354636 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-sb\") pod \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.354816 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-swift-storage-0\") pod \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.354950 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-svc\") pod \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.354970 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-config\") pod \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.354989 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4z8f\" (UniqueName: \"kubernetes.io/projected/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-kube-api-access-z4z8f\") pod \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.365437 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.378020 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-kube-api-access-z4z8f" (OuterVolumeSpecName: "kube-api-access-z4z8f") pod "5a85ff98-c5c9-4735-ad9d-3c987976bd2f" (UID: "5a85ff98-c5c9-4735-ad9d-3c987976bd2f"). InnerVolumeSpecName "kube-api-access-z4z8f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.448506 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.453664 4751 scope.go:117] "RemoveContainer" containerID="57c96884c7e80a6d477536792bb89b73f1542edc832a38be9d3a266693f347ec" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.460501 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-combined-ca-bundle\") pod \"11052d78-74b6-472a-aaba-513368f51ce3\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.460676 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-ovndb-tls-certs\") pod \"11052d78-74b6-472a-aaba-513368f51ce3\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.460779 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-config\") pod \"11052d78-74b6-472a-aaba-513368f51ce3\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.460842 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29d9w\" (UniqueName: \"kubernetes.io/projected/11052d78-74b6-472a-aaba-513368f51ce3-kube-api-access-29d9w\") pod \"11052d78-74b6-472a-aaba-513368f51ce3\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.460883 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-httpd-config\") pod \"11052d78-74b6-472a-aaba-513368f51ce3\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.461487 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4z8f\" (UniqueName: \"kubernetes.io/projected/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-kube-api-access-z4z8f\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.476628 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477065 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="ceilometer-central-agent" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477078 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="ceilometer-central-agent" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477094 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" containerName="init" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477100 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" containerName="init" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477114 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11052d78-74b6-472a-aaba-513368f51ce3" 
containerName="neutron-httpd" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477120 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="11052d78-74b6-472a-aaba-513368f51ce3" containerName="neutron-httpd" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477239 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11052d78-74b6-472a-aaba-513368f51ce3" containerName="neutron-api" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477246 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="11052d78-74b6-472a-aaba-513368f51ce3" containerName="neutron-api" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477257 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8312bae-69c5-4c31-844e-42a90c18bfd3" containerName="mariadb-account-create-update" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477263 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8312bae-69c5-4c31-844e-42a90c18bfd3" containerName="mariadb-account-create-update" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477279 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f139e0b-3ae5-4d5c-aa87-f15d00373f98" containerName="mariadb-database-create" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477284 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f139e0b-3ae5-4d5c-aa87-f15d00373f98" containerName="mariadb-database-create" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477293 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" containerName="dnsmasq-dns" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477299 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" containerName="dnsmasq-dns" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477318 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="proxy-httpd" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477336 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="proxy-httpd" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477352 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5" containerName="mariadb-account-create-update" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477358 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5" containerName="mariadb-account-create-update" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477375 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444e34d6-7904-405b-956e-d23aed56537e" containerName="mariadb-database-create" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477381 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="444e34d6-7904-405b-956e-d23aed56537e" containerName="mariadb-database-create" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477391 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="ceilometer-notification-agent" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477396 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="ceilometer-notification-agent" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477410 
4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="sg-core" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477417 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="sg-core" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477430 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a169fb7b-bcf8-44d8-8942-a42a4de6001d" containerName="mariadb-database-create" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477435 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a169fb7b-bcf8-44d8-8942-a42a4de6001d" containerName="mariadb-database-create" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477616 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="11052d78-74b6-472a-aaba-513368f51ce3" containerName="neutron-httpd" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477632 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="proxy-httpd" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477641 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="11052d78-74b6-472a-aaba-513368f51ce3" containerName="neutron-api" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477653 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5" containerName="mariadb-account-create-update" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477661 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a169fb7b-bcf8-44d8-8942-a42a4de6001d" containerName="mariadb-database-create" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477675 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="sg-core" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477684 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="444e34d6-7904-405b-956e-d23aed56537e" containerName="mariadb-database-create" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477690 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="ceilometer-notification-agent" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477699 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="ceilometer-central-agent" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477711 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8312bae-69c5-4c31-844e-42a90c18bfd3" containerName="mariadb-account-create-update" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477721 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f139e0b-3ae5-4d5c-aa87-f15d00373f98" containerName="mariadb-database-create" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477730 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" containerName="dnsmasq-dns" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.479741 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.484179 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.485652 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.559154 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5a85ff98-c5c9-4735-ad9d-3c987976bd2f" (UID: "5a85ff98-c5c9-4735-ad9d-3c987976bd2f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.575942 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11052d78-74b6-472a-aaba-513368f51ce3-kube-api-access-29d9w" (OuterVolumeSpecName: "kube-api-access-29d9w") pod "11052d78-74b6-472a-aaba-513368f51ce3" (UID: "11052d78-74b6-472a-aaba-513368f51ce3"). InnerVolumeSpecName "kube-api-access-29d9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.576982 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.577462 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5a85ff98-c5c9-4735-ad9d-3c987976bd2f" (UID: "5a85ff98-c5c9-4735-ad9d-3c987976bd2f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.578665 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "11052d78-74b6-472a-aaba-513368f51ce3" (UID: "11052d78-74b6-472a-aaba-513368f51ce3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.578728 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5a85ff98-c5c9-4735-ad9d-3c987976bd2f" (UID: "5a85ff98-c5c9-4735-ad9d-3c987976bd2f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.610839 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-d9fcd4c7f-gcp2z"] Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.629244 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqkxn\" (UniqueName: \"kubernetes.io/projected/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-kube-api-access-dqkxn\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.643722 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-run-httpd\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.643903 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.645679 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-log-httpd\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.648527 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.648590 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-scripts\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.648634 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-config-data\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.648840 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.648854 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29d9w\" (UniqueName: \"kubernetes.io/projected/11052d78-74b6-472a-aaba-513368f51ce3-kube-api-access-29d9w\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.648865 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.648879 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.648890 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.664109 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-84f9b8dd8f-qtmlz"] Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.684256 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-d6c877d68-9ktwv"] Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.701152 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6f4bd4b69-ntk8n"] Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.713836 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6ffb596769-rgv47"] Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.747658 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5a85ff98-c5c9-4735-ad9d-3c987976bd2f" (UID: "5a85ff98-c5c9-4735-ad9d-3c987976bd2f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.750442 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqkxn\" (UniqueName: \"kubernetes.io/projected/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-kube-api-access-dqkxn\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.750542 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-run-httpd\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.750601 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.750642 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-log-httpd\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.750661 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 
crc kubenswrapper[4751]: I0130 21:39:23.750697 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-scripts\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.750728 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-config-data\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.751380 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.752299 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-run-httpd\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.752859 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-log-httpd\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.771625 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-config-data\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.772381 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.772384 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.784198 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-scripts\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.835978 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqkxn\" (UniqueName: \"kubernetes.io/projected/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-kube-api-access-dqkxn\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.858005 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-config" (OuterVolumeSpecName: "config") pod "11052d78-74b6-472a-aaba-513368f51ce3" (UID: "11052d78-74b6-472a-aaba-513368f51ce3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.891832 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11052d78-74b6-472a-aaba-513368f51ce3" (UID: "11052d78-74b6-472a-aaba-513368f51ce3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.903516 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-config" (OuterVolumeSpecName: "config") pod "5a85ff98-c5c9-4735-ad9d-3c987976bd2f" (UID: "5a85ff98-c5c9-4735-ad9d-3c987976bd2f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.943985 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.962642 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.962674 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.962686 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.978739 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "11052d78-74b6-472a-aaba-513368f51ce3" (UID: "11052d78-74b6-472a-aaba-513368f51ce3"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.992158 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" path="/var/lib/kubelet/pods/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca/volumes" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.064661 4751 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.156969 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.157012 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.157055 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.157888 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.157944 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" gracePeriod=600 Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.167143 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f4bd4b69-ntk8n" event={"ID":"43d36aef-fb14-4701-8931-9aaa96d049a9","Type":"ContainerStarted","Data":"c0307f0807d895bc4c4c81ee028a2f34849a32fd2400b791f772ab65d779a108"} Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.173932 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" event={"ID":"191c5874-d3f0-4a2b-adcf-8ceed228e459","Type":"ContainerStarted","Data":"1364dfb35f78bd1c1c6c4e97299ac2e166c205513eddfbb1858b9264a7b65646"} Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.208736 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-84f9b8dd8f-qtmlz" event={"ID":"8f68fda0-5c9c-46c2-82b3-633695c6e6f4","Type":"ContainerStarted","Data":"11e830bec54790dfa3e122cbf706b934f604f7df0b29532d7a71f254e054bb5e"} Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.242250 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6ffb596769-rgv47" 
event={"ID":"826635d2-0549-4d63-84e2-3ba3cdf85db4","Type":"ContainerStarted","Data":"f7aa37517cca46d92e89c8de2e90e7c2581e4c73761a48cd01499fa3a6ce3c18"} Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.254165 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-566dccff6-ddvxf" event={"ID":"11052d78-74b6-472a-aaba-513368f51ce3","Type":"ContainerDied","Data":"bff81fd2907d366a655d26ebdf3a255c3bffa93ae91269d7fa674f369fb98f34"} Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.254221 4751 scope.go:117] "RemoveContainer" containerID="74aefce86656a68e812b38f7658b2359076b62078ffd0f3974807d58363f94b0" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.254313 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.256336 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" event={"ID":"8a808a38-f939-4b4f-8386-e177712737d6","Type":"ContainerStarted","Data":"499d3637c3e03f2b7dc0a86e62ae72f328746856d2c5b4b97226255304ddbec8"} Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.297572 4751 scope.go:117] "RemoveContainer" containerID="2ba96d5744b69d3f9276be5b0e9715862e0020d80034d236fcecc5d9420b54cc" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.313956 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cf77776d-s5nbq" event={"ID":"7782d459-57bc-442e-a471-6c5839d6de47","Type":"ContainerStarted","Data":"674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa"} Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.314103 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-cf77776d-s5nbq" podUID="7782d459-57bc-442e-a471-6c5839d6de47" containerName="heat-api" containerID="cri-o://674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa" gracePeriod=60 Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.314318 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.325842 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.330236 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-58dc6df599-nmmxw" podUID="b9f02a32-18ed-4030-94d6-16f4d0feff52" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.346352 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-47sz5"] Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.349688 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.355734 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.356345 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5j6gn" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.358618 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-47sz5"] Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.383310 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-566dccff6-ddvxf"] Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.400712 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-566dccff6-ddvxf"] Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.401415 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.404301 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-cf77776d-s5nbq" podStartSLOduration=4.491048518 podStartE2EDuration="18.404283226s" podCreationTimestamp="2026-01-30 21:39:06 +0000 UTC" firstStartedPulling="2026-01-30 21:39:07.964215272 +0000 UTC m=+1486.710037921" lastFinishedPulling="2026-01-30 21:39:21.87744998 +0000 UTC m=+1500.623272629" observedRunningTime="2026-01-30 21:39:24.339397748 +0000 UTC m=+1503.085220397" watchObservedRunningTime="2026-01-30 21:39:24.404283226 +0000 UTC m=+1503.150105875" Jan 30 21:39:24 crc kubenswrapper[4751]: E0130 21:39:24.432207 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.484767 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzpw7\" (UniqueName: \"kubernetes.io/projected/551aecfb-7969-4644-ac50-b8f4c63002d3-kube-api-access-fzpw7\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.485015 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-config-data\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.485039 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.485117 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-scripts\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.540437 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-hb44m"] Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.550375 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-hb44m"] Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.587662 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzpw7\" (UniqueName: \"kubernetes.io/projected/551aecfb-7969-4644-ac50-b8f4c63002d3-kube-api-access-fzpw7\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.587722 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-config-data\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.587750 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.587811 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-scripts\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.596584 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-scripts\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.599297 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-config-data\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.604827 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.609029 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fzpw7\" (UniqueName: \"kubernetes.io/projected/551aecfb-7969-4644-ac50-b8f4c63002d3-kube-api-access-fzpw7\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.699060 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.804667 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-58dc6df599-nmmxw" podUID="b9f02a32-18ed-4030-94d6-16f4d0feff52" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.815877 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-58dc6df599-nmmxw" podUID="b9f02a32-18ed-4030-94d6-16f4d0feff52" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.819779 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.291524 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-47sz5"] Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.362359 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.489948 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" exitCode=0 Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.490037 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88"} Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.490100 4751 scope.go:117] "RemoveContainer" containerID="589b659983c64eaeb9431668de4131b84f85d7d4aaf79c3e0b75a24b0812e09e" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.491005 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:39:25 crc kubenswrapper[4751]: E0130 21:39:25.491399 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.492058 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-84f9b8dd8f-qtmlz" event={"ID":"8f68fda0-5c9c-46c2-82b3-633695c6e6f4","Type":"ContainerStarted","Data":"db89466ab47a413bfa604263bee5eb8c78e018c63643ceaff0f27f62ee26e928"} Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.492122 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:25 crc kubenswrapper[4751]: 
I0130 21:39:25.494058 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" event={"ID":"8a808a38-f939-4b4f-8386-e177712737d6","Type":"ContainerStarted","Data":"702678d6125a2ef911b38b5fcb8c725d8c871b8257728962b6a494f07ee762d0"} Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.494225 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.495747 4751 generic.go:334] "Generic (PLEG): container finished" podID="7782d459-57bc-442e-a471-6c5839d6de47" containerID="674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa" exitCode=0 Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.495844 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.496624 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cf77776d-s5nbq" event={"ID":"7782d459-57bc-442e-a471-6c5839d6de47","Type":"ContainerDied","Data":"674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa"} Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.498516 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f4bd4b69-ntk8n" event={"ID":"43d36aef-fb14-4701-8931-9aaa96d049a9","Type":"ContainerStarted","Data":"f4a9281bbfdd290c0c4cad13b45f8bae7e6a12cff0d866a3bc02118e3db003a9"} Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.499420 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.502138 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" event={"ID":"191c5874-d3f0-4a2b-adcf-8ceed228e459","Type":"ContainerStarted","Data":"11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba"} Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.502980 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.510078 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511","Type":"ContainerStarted","Data":"4884de2c448f76056d8317537ec7b098481217936fd9e2f04eb669de3c631faf"} Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.527257 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data-custom\") pod \"7782d459-57bc-442e-a471-6c5839d6de47\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.527388 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data\") pod \"7782d459-57bc-442e-a471-6c5839d6de47\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.527564 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-combined-ca-bundle\") pod \"7782d459-57bc-442e-a471-6c5839d6de47\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " Jan 30 21:39:25 crc 
kubenswrapper[4751]: I0130 21:39:25.527606 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwhz4\" (UniqueName: \"kubernetes.io/projected/7782d459-57bc-442e-a471-6c5839d6de47-kube-api-access-vwhz4\") pod \"7782d459-57bc-442e-a471-6c5839d6de47\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.528683 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6ffb596769-rgv47" event={"ID":"826635d2-0549-4d63-84e2-3ba3cdf85db4","Type":"ContainerStarted","Data":"bcfa1a5e063be4d843f69be862bdf4198ba058b09d410c418f13e640ec3964ee"} Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.529071 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.536049 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7782d459-57bc-442e-a471-6c5839d6de47-kube-api-access-vwhz4" (OuterVolumeSpecName: "kube-api-access-vwhz4") pod "7782d459-57bc-442e-a471-6c5839d6de47" (UID: "7782d459-57bc-442e-a471-6c5839d6de47"). InnerVolumeSpecName "kube-api-access-vwhz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.541546 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-47sz5" event={"ID":"551aecfb-7969-4644-ac50-b8f4c63002d3","Type":"ContainerStarted","Data":"ee0d949c9abfb18e45a0aba7521f0154b8bf9089739c141933f8355d900aff65"} Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.550008 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7782d459-57bc-442e-a471-6c5839d6de47" (UID: "7782d459-57bc-442e-a471-6c5839d6de47"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.552454 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.552514 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwhz4\" (UniqueName: \"kubernetes.io/projected/7782d459-57bc-442e-a471-6c5839d6de47-kube-api-access-vwhz4\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.590931 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-84f9b8dd8f-qtmlz" podStartSLOduration=10.590906117 podStartE2EDuration="10.590906117s" podCreationTimestamp="2026-01-30 21:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:25.56152023 +0000 UTC m=+1504.307342879" watchObservedRunningTime="2026-01-30 21:39:25.590906117 +0000 UTC m=+1504.336728766" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.640368 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7782d459-57bc-442e-a471-6c5839d6de47" (UID: "7782d459-57bc-442e-a471-6c5839d6de47"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.660593 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.674085 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6f4bd4b69-ntk8n" podStartSLOduration=8.674065084 podStartE2EDuration="8.674065084s" podCreationTimestamp="2026-01-30 21:39:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:25.628971226 +0000 UTC m=+1504.374793875" watchObservedRunningTime="2026-01-30 21:39:25.674065084 +0000 UTC m=+1504.419887733" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.706626 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data" (OuterVolumeSpecName: "config-data") pod "7782d459-57bc-442e-a471-6c5839d6de47" (UID: "7782d459-57bc-442e-a471-6c5839d6de47"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.776980 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.792814 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" podStartSLOduration=10.792796914 podStartE2EDuration="10.792796914s" podCreationTimestamp="2026-01-30 21:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:25.680579789 +0000 UTC m=+1504.426402438" watchObservedRunningTime="2026-01-30 21:39:25.792796914 +0000 UTC m=+1504.538619563" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.818361 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" podStartSLOduration=8.818309348 podStartE2EDuration="8.818309348s" podCreationTimestamp="2026-01-30 21:39:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:25.757656363 +0000 UTC m=+1504.503479012" watchObservedRunningTime="2026-01-30 21:39:25.818309348 +0000 UTC m=+1504.564131997" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.903261 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6ffb596769-rgv47" podStartSLOduration=10.903240002 podStartE2EDuration="10.903240002s" podCreationTimestamp="2026-01-30 21:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:25.817772672 +0000 UTC m=+1504.563595321" watchObservedRunningTime="2026-01-30 21:39:25.903240002 +0000 UTC m=+1504.649062651" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.941381 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-cf77776d-s5nbq"] Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.970630 
4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-cf77776d-s5nbq"] Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.994106 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11052d78-74b6-472a-aaba-513368f51ce3" path="/var/lib/kubelet/pods/11052d78-74b6-472a-aaba-513368f51ce3/volumes" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.994893 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" path="/var/lib/kubelet/pods/5a85ff98-c5c9-4735-ad9d-3c987976bd2f/volumes" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.995756 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7782d459-57bc-442e-a471-6c5839d6de47" path="/var/lib/kubelet/pods/7782d459-57bc-442e-a471-6c5839d6de47/volumes" Jan 30 21:39:26 crc kubenswrapper[4751]: E0130 21:39:26.174237 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f68fda0_5c9c_46c2_82b3_633695c6e6f4.slice/crio-db89466ab47a413bfa604263bee5eb8c78e018c63643ceaff0f27f62ee26e928.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7782d459_57bc_442e_a471_6c5839d6de47.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7782d459_57bc_442e_a471_6c5839d6de47.slice/crio-0cdf7ad51a70ca5bfb45057c9c67833605be258614b207239214e50e206ee1c5\": RecentStats: unable to find data in memory cache]" Jan 30 21:39:26 crc kubenswrapper[4751]: I0130 21:39:26.551490 4751 generic.go:334] "Generic (PLEG): container finished" podID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" containerID="db89466ab47a413bfa604263bee5eb8c78e018c63643ceaff0f27f62ee26e928" exitCode=1 Jan 30 21:39:26 crc kubenswrapper[4751]: I0130 21:39:26.551783 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-84f9b8dd8f-qtmlz" event={"ID":"8f68fda0-5c9c-46c2-82b3-633695c6e6f4","Type":"ContainerDied","Data":"db89466ab47a413bfa604263bee5eb8c78e018c63643ceaff0f27f62ee26e928"} Jan 30 21:39:26 crc kubenswrapper[4751]: I0130 21:39:26.552513 4751 scope.go:117] "RemoveContainer" containerID="db89466ab47a413bfa604263bee5eb8c78e018c63643ceaff0f27f62ee26e928" Jan 30 21:39:27 crc kubenswrapper[4751]: I0130 21:39:27.206832 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:27 crc kubenswrapper[4751]: I0130 21:39:27.403233 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-586n4" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server" probeResult="failure" output=< Jan 30 21:39:27 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:39:27 crc kubenswrapper[4751]: > Jan 30 21:39:27 crc kubenswrapper[4751]: I0130 21:39:27.566773 4751 generic.go:334] "Generic (PLEG): container finished" podID="826635d2-0549-4d63-84e2-3ba3cdf85db4" containerID="bcfa1a5e063be4d843f69be862bdf4198ba058b09d410c418f13e640ec3964ee" exitCode=1 Jan 30 21:39:27 crc kubenswrapper[4751]: I0130 21:39:27.566914 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6ffb596769-rgv47" 
event={"ID":"826635d2-0549-4d63-84e2-3ba3cdf85db4","Type":"ContainerDied","Data":"bcfa1a5e063be4d843f69be862bdf4198ba058b09d410c418f13e640ec3964ee"} Jan 30 21:39:27 crc kubenswrapper[4751]: I0130 21:39:27.567975 4751 scope.go:117] "RemoveContainer" containerID="bcfa1a5e063be4d843f69be862bdf4198ba058b09d410c418f13e640ec3964ee" Jan 30 21:39:27 crc kubenswrapper[4751]: I0130 21:39:27.816862 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-58dc6df599-nmmxw" podUID="b9f02a32-18ed-4030-94d6-16f4d0feff52" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 21:39:27 crc kubenswrapper[4751]: I0130 21:39:27.817582 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-58dc6df599-nmmxw" podUID="b9f02a32-18ed-4030-94d6-16f4d0feff52" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 21:39:29 crc kubenswrapper[4751]: I0130 21:39:29.247175 4751 scope.go:117] "RemoveContainer" containerID="674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa" Jan 30 21:39:29 crc kubenswrapper[4751]: I0130 21:39:29.274990 4751 scope.go:117] "RemoveContainer" containerID="674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa" Jan 30 21:39:29 crc kubenswrapper[4751]: E0130 21:39:29.275721 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa\": container with ID starting with 674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa not found: ID does not exist" containerID="674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa" Jan 30 21:39:29 crc kubenswrapper[4751]: I0130 21:39:29.275785 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa"} err="failed to get container status \"674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa\": rpc error: code = NotFound desc = could not find container \"674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa\": container with ID starting with 674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa not found: ID does not exist" Jan 30 21:39:29 crc kubenswrapper[4751]: I0130 21:39:29.819002 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:29 crc kubenswrapper[4751]: I0130 21:39:29.827864 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.535004 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.579099 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.636097 4751 generic.go:334] "Generic (PLEG): container finished" podID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" containerID="3f3a7ffb288bc415dc5b59baa7fb9c6b68e00f52800d342ad995dbc272c7f1bb" exitCode=1 Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.636196 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-84f9b8dd8f-qtmlz" 
event={"ID":"8f68fda0-5c9c-46c2-82b3-633695c6e6f4","Type":"ContainerDied","Data":"3f3a7ffb288bc415dc5b59baa7fb9c6b68e00f52800d342ad995dbc272c7f1bb"} Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.636232 4751 scope.go:117] "RemoveContainer" containerID="db89466ab47a413bfa604263bee5eb8c78e018c63643ceaff0f27f62ee26e928" Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.637053 4751 scope.go:117] "RemoveContainer" containerID="3f3a7ffb288bc415dc5b59baa7fb9c6b68e00f52800d342ad995dbc272c7f1bb" Jan 30 21:39:30 crc kubenswrapper[4751]: E0130 21:39:30.637578 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-84f9b8dd8f-qtmlz_openstack(8f68fda0-5c9c-46c2-82b3-633695c6e6f4)\"" pod="openstack/heat-api-84f9b8dd8f-qtmlz" podUID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.656190 4751 generic.go:334] "Generic (PLEG): container finished" podID="826635d2-0549-4d63-84e2-3ba3cdf85db4" containerID="cf22f8655eccc32aa59cef7b29c129725319b4f4f4da7c51fdef15993d0d2382" exitCode=1 Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.656271 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6ffb596769-rgv47" event={"ID":"826635d2-0549-4d63-84e2-3ba3cdf85db4","Type":"ContainerDied","Data":"cf22f8655eccc32aa59cef7b29c129725319b4f4f4da7c51fdef15993d0d2382"} Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.657024 4751 scope.go:117] "RemoveContainer" containerID="cf22f8655eccc32aa59cef7b29c129725319b4f4f4da7c51fdef15993d0d2382" Jan 30 21:39:30 crc kubenswrapper[4751]: E0130 21:39:30.657354 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6ffb596769-rgv47_openstack(826635d2-0549-4d63-84e2-3ba3cdf85db4)\"" pod="openstack/heat-cfnapi-6ffb596769-rgv47" podUID="826635d2-0549-4d63-84e2-3ba3cdf85db4" Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.661443 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511","Type":"ContainerStarted","Data":"cfdea75b80256723e6e5c7537ac03523b96b0f4ab2bf0621af6d62950a93c5b8"} Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.733731 4751 scope.go:117] "RemoveContainer" containerID="bcfa1a5e063be4d843f69be862bdf4198ba058b09d410c418f13e640ec3964ee" Jan 30 21:39:31 crc kubenswrapper[4751]: I0130 21:39:31.677415 4751 scope.go:117] "RemoveContainer" containerID="3f3a7ffb288bc415dc5b59baa7fb9c6b68e00f52800d342ad995dbc272c7f1bb" Jan 30 21:39:31 crc kubenswrapper[4751]: E0130 21:39:31.677971 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-84f9b8dd8f-qtmlz_openstack(8f68fda0-5c9c-46c2-82b3-633695c6e6f4)\"" pod="openstack/heat-api-84f9b8dd8f-qtmlz" podUID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" Jan 30 21:39:31 crc kubenswrapper[4751]: I0130 21:39:31.681285 4751 scope.go:117] "RemoveContainer" containerID="cf22f8655eccc32aa59cef7b29c129725319b4f4f4da7c51fdef15993d0d2382" Jan 30 21:39:31 crc kubenswrapper[4751]: E0130 21:39:31.681547 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=heat-cfnapi pod=heat-cfnapi-6ffb596769-rgv47_openstack(826635d2-0549-4d63-84e2-3ba3cdf85db4)\"" pod="openstack/heat-cfnapi-6ffb596769-rgv47" podUID="826635d2-0549-4d63-84e2-3ba3cdf85db4" Jan 30 21:39:31 crc kubenswrapper[4751]: I0130 21:39:31.685931 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511","Type":"ContainerStarted","Data":"9c01cf6df6cfdacc48a6527bc7f77429e8ab33e1ed7d506f200e6257569ad93c"} Jan 30 21:39:31 crc kubenswrapper[4751]: I0130 21:39:31.933839 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:39:32 crc kubenswrapper[4751]: I0130 21:39:32.708830 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511","Type":"ContainerStarted","Data":"6d06091377b2a8fc82610799f2cf8764b0bd657f28f4bab21ad30e6eb045d8d2"} Jan 30 21:39:33 crc kubenswrapper[4751]: I0130 21:39:33.746731 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:34 crc kubenswrapper[4751]: I0130 21:39:34.944490 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:35 crc kubenswrapper[4751]: I0130 21:39:35.017159 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-84f9b8dd8f-qtmlz"] Jan 30 21:39:35 crc kubenswrapper[4751]: I0130 21:39:35.293665 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:35 crc kubenswrapper[4751]: I0130 21:39:35.367027 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6ffb596769-rgv47"] Jan 30 21:39:35 crc kubenswrapper[4751]: I0130 21:39:35.534395 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:35 crc kubenswrapper[4751]: I0130 21:39:35.571155 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:39:35 crc kubenswrapper[4751]: I0130 21:39:35.579728 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:35 crc kubenswrapper[4751]: I0130 21:39:35.635836 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6c448464db-8pmrl"] Jan 30 21:39:35 crc kubenswrapper[4751]: I0130 21:39:35.636052 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-6c448464db-8pmrl" podUID="7fea9c34-deff-4930-87b5-c697eb7831d8" containerName="heat-engine" containerID="cri-o://4b306988a2380cbe1d94c21ad11cef6733288cda2590ed761b989755ac079478" gracePeriod=60 Jan 30 21:39:36 crc kubenswrapper[4751]: I0130 21:39:36.527969 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:39:36 crc kubenswrapper[4751]: I0130 21:39:36.679394 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:39:36 crc kubenswrapper[4751]: E0130 21:39:36.712526 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="4b306988a2380cbe1d94c21ad11cef6733288cda2590ed761b989755ac079478" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 21:39:36 crc kubenswrapper[4751]: E0130 21:39:36.716976 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4b306988a2380cbe1d94c21ad11cef6733288cda2590ed761b989755ac079478" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 21:39:36 crc kubenswrapper[4751]: E0130 21:39:36.718965 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4b306988a2380cbe1d94c21ad11cef6733288cda2590ed761b989755ac079478" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 21:39:36 crc kubenswrapper[4751]: E0130 21:39:36.719036 4751 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6c448464db-8pmrl" podUID="7fea9c34-deff-4930-87b5-c697eb7831d8" containerName="heat-engine" Jan 30 21:39:36 crc kubenswrapper[4751]: I0130 21:39:36.777155 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-586n4"] Jan 30 21:39:37 crc kubenswrapper[4751]: I0130 21:39:37.815676 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-586n4" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server" containerID="cri-o://21ac36001cb714817d8ab855578743e4c5c5ddbfabc891012d0c87386994da9f" gracePeriod=2 Jan 30 21:39:38 crc kubenswrapper[4751]: I0130 21:39:38.828653 4751 generic.go:334] "Generic (PLEG): container finished" podID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerID="21ac36001cb714817d8ab855578743e4c5c5ddbfabc891012d0c87386994da9f" exitCode=0 Jan 30 21:39:38 crc kubenswrapper[4751]: I0130 21:39:38.828732 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-586n4" event={"ID":"f42767ff-b1d3-49e9-8b8d-39c65ea98978","Type":"ContainerDied","Data":"21ac36001cb714817d8ab855578743e4c5c5ddbfabc891012d0c87386994da9f"} Jan 30 21:39:39 crc kubenswrapper[4751]: I0130 21:39:39.976713 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:39:39 crc kubenswrapper[4751]: E0130 21:39:39.977047 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:39:41 crc kubenswrapper[4751]: I0130 21:39:41.890116 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6ffb596769-rgv47" event={"ID":"826635d2-0549-4d63-84e2-3ba3cdf85db4","Type":"ContainerDied","Data":"f7aa37517cca46d92e89c8de2e90e7c2581e4c73761a48cd01499fa3a6ce3c18"} Jan 30 21:39:41 crc kubenswrapper[4751]: I0130 21:39:41.890782 4751 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f7aa37517cca46d92e89c8de2e90e7c2581e4c73761a48cd01499fa3a6ce3c18" Jan 30 21:39:41 crc kubenswrapper[4751]: I0130 21:39:41.895156 4751 generic.go:334] "Generic (PLEG): container finished" podID="7fea9c34-deff-4930-87b5-c697eb7831d8" containerID="4b306988a2380cbe1d94c21ad11cef6733288cda2590ed761b989755ac079478" exitCode=0 Jan 30 21:39:41 crc kubenswrapper[4751]: I0130 21:39:41.895247 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6c448464db-8pmrl" event={"ID":"7fea9c34-deff-4930-87b5-c697eb7831d8","Type":"ContainerDied","Data":"4b306988a2380cbe1d94c21ad11cef6733288cda2590ed761b989755ac079478"} Jan 30 21:39:41 crc kubenswrapper[4751]: I0130 21:39:41.898570 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-84f9b8dd8f-qtmlz" event={"ID":"8f68fda0-5c9c-46c2-82b3-633695c6e6f4","Type":"ContainerDied","Data":"11e830bec54790dfa3e122cbf706b934f604f7df0b29532d7a71f254e054bb5e"} Jan 30 21:39:41 crc kubenswrapper[4751]: I0130 21:39:41.898611 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11e830bec54790dfa3e122cbf706b934f604f7df0b29532d7a71f254e054bb5e" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.250896 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.332024 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.408771 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-combined-ca-bundle\") pod \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.408840 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data\") pod \"826635d2-0549-4d63-84e2-3ba3cdf85db4\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.408866 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data-custom\") pod \"826635d2-0549-4d63-84e2-3ba3cdf85db4\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.408919 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data-custom\") pod \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.409037 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-combined-ca-bundle\") pod \"826635d2-0549-4d63-84e2-3ba3cdf85db4\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.409064 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzjlk\" (UniqueName: 
\"kubernetes.io/projected/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-kube-api-access-vzjlk\") pod \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.409197 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data\") pod \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.409239 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq89d\" (UniqueName: \"kubernetes.io/projected/826635d2-0549-4d63-84e2-3ba3cdf85db4-kube-api-access-zq89d\") pod \"826635d2-0549-4d63-84e2-3ba3cdf85db4\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.422474 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8f68fda0-5c9c-46c2-82b3-633695c6e6f4" (UID: "8f68fda0-5c9c-46c2-82b3-633695c6e6f4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.442357 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/826635d2-0549-4d63-84e2-3ba3cdf85db4-kube-api-access-zq89d" (OuterVolumeSpecName: "kube-api-access-zq89d") pod "826635d2-0549-4d63-84e2-3ba3cdf85db4" (UID: "826635d2-0549-4d63-84e2-3ba3cdf85db4"). InnerVolumeSpecName "kube-api-access-zq89d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.442984 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "826635d2-0549-4d63-84e2-3ba3cdf85db4" (UID: "826635d2-0549-4d63-84e2-3ba3cdf85db4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.447020 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-kube-api-access-vzjlk" (OuterVolumeSpecName: "kube-api-access-vzjlk") pod "8f68fda0-5c9c-46c2-82b3-633695c6e6f4" (UID: "8f68fda0-5c9c-46c2-82b3-633695c6e6f4"). InnerVolumeSpecName "kube-api-access-vzjlk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.514257 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzjlk\" (UniqueName: \"kubernetes.io/projected/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-kube-api-access-vzjlk\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.514362 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq89d\" (UniqueName: \"kubernetes.io/projected/826635d2-0549-4d63-84e2-3ba3cdf85db4-kube-api-access-zq89d\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.514378 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.514391 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.579356 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.590717 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.628477 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "826635d2-0549-4d63-84e2-3ba3cdf85db4" (UID: "826635d2-0549-4d63-84e2-3ba3cdf85db4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.648023 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f68fda0-5c9c-46c2-82b3-633695c6e6f4" (UID: "8f68fda0-5c9c-46c2-82b3-633695c6e6f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.707755 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data" (OuterVolumeSpecName: "config-data") pod "826635d2-0549-4d63-84e2-3ba3cdf85db4" (UID: "826635d2-0549-4d63-84e2-3ba3cdf85db4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.718923 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psx9g\" (UniqueName: \"kubernetes.io/projected/7fea9c34-deff-4930-87b5-c697eb7831d8-kube-api-access-psx9g\") pod \"7fea9c34-deff-4930-87b5-c697eb7831d8\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.719046 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data-custom\") pod \"7fea9c34-deff-4930-87b5-c697eb7831d8\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.719226 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-utilities\") pod \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.719269 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-combined-ca-bundle\") pod \"7fea9c34-deff-4930-87b5-c697eb7831d8\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.719338 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data\") pod \"7fea9c34-deff-4930-87b5-c697eb7831d8\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.719415 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjpj4\" (UniqueName: \"kubernetes.io/projected/f42767ff-b1d3-49e9-8b8d-39c65ea98978-kube-api-access-wjpj4\") pod \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.719491 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-catalog-content\") pod \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.720167 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.720194 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.720206 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.721316 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-utilities" (OuterVolumeSpecName: "utilities") pod "f42767ff-b1d3-49e9-8b8d-39c65ea98978" (UID: "f42767ff-b1d3-49e9-8b8d-39c65ea98978"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.723469 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data" (OuterVolumeSpecName: "config-data") pod "8f68fda0-5c9c-46c2-82b3-633695c6e6f4" (UID: "8f68fda0-5c9c-46c2-82b3-633695c6e6f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.724415 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fea9c34-deff-4930-87b5-c697eb7831d8-kube-api-access-psx9g" (OuterVolumeSpecName: "kube-api-access-psx9g") pod "7fea9c34-deff-4930-87b5-c697eb7831d8" (UID: "7fea9c34-deff-4930-87b5-c697eb7831d8"). InnerVolumeSpecName "kube-api-access-psx9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.726837 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7fea9c34-deff-4930-87b5-c697eb7831d8" (UID: "7fea9c34-deff-4930-87b5-c697eb7831d8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.729662 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f42767ff-b1d3-49e9-8b8d-39c65ea98978-kube-api-access-wjpj4" (OuterVolumeSpecName: "kube-api-access-wjpj4") pod "f42767ff-b1d3-49e9-8b8d-39c65ea98978" (UID: "f42767ff-b1d3-49e9-8b8d-39c65ea98978"). InnerVolumeSpecName "kube-api-access-wjpj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.769110 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fea9c34-deff-4930-87b5-c697eb7831d8" (UID: "7fea9c34-deff-4930-87b5-c697eb7831d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.795517 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data" (OuterVolumeSpecName: "config-data") pod "7fea9c34-deff-4930-87b5-c697eb7831d8" (UID: "7fea9c34-deff-4930-87b5-c697eb7831d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.800504 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f42767ff-b1d3-49e9-8b8d-39c65ea98978" (UID: "f42767ff-b1d3-49e9-8b8d-39c65ea98978"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.822346 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjpj4\" (UniqueName: \"kubernetes.io/projected/f42767ff-b1d3-49e9-8b8d-39c65ea98978-kube-api-access-wjpj4\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.822378 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.822389 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psx9g\" (UniqueName: \"kubernetes.io/projected/7fea9c34-deff-4930-87b5-c697eb7831d8-kube-api-access-psx9g\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.822397 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.822405 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.822414 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.822424 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.822434 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.917958 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"af93872a-62a1-407c-9932-2afb4313f457","Type":"ContainerStarted","Data":"4cffc45d3bce332d82d2f158e979f8c8bd0f529ff62b75a3b9cd0b8d62526da0"} Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.923437 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511","Type":"ContainerStarted","Data":"1fc41ef015f899a24b1533353b6987552c41d8844cf586e1769d6cc1f39a0c6a"} Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.924109 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.924037 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="proxy-httpd" containerID="cri-o://1fc41ef015f899a24b1533353b6987552c41d8844cf586e1769d6cc1f39a0c6a" gracePeriod=30 Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.924053 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="sg-core" 
containerID="cri-o://6d06091377b2a8fc82610799f2cf8764b0bd657f28f4bab21ad30e6eb045d8d2" gracePeriod=30 Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.924063 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="ceilometer-notification-agent" containerID="cri-o://9c01cf6df6cfdacc48a6527bc7f77429e8ab33e1ed7d506f200e6257569ad93c" gracePeriod=30 Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.923846 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="ceilometer-central-agent" containerID="cri-o://cfdea75b80256723e6e5c7537ac03523b96b0f4ab2bf0621af6d62950a93c5b8" gracePeriod=30 Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.928992 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6c448464db-8pmrl" event={"ID":"7fea9c34-deff-4930-87b5-c697eb7831d8","Type":"ContainerDied","Data":"c52a6ec70b4ea98c373beb2d768ebee9efb71cdd7d6badafe947f067a081150e"} Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.929423 4751 scope.go:117] "RemoveContainer" containerID="4b306988a2380cbe1d94c21ad11cef6733288cda2590ed761b989755ac079478" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.929393 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.949487 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.304243092 podStartE2EDuration="42.949473308s" podCreationTimestamp="2026-01-30 21:39:00 +0000 UTC" firstStartedPulling="2026-01-30 21:39:01.253644004 +0000 UTC m=+1479.999466653" lastFinishedPulling="2026-01-30 21:39:41.89887422 +0000 UTC m=+1520.644696869" observedRunningTime="2026-01-30 21:39:42.938372411 +0000 UTC m=+1521.684195060" watchObservedRunningTime="2026-01-30 21:39:42.949473308 +0000 UTC m=+1521.695295957" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.961668 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-586n4" event={"ID":"f42767ff-b1d3-49e9-8b8d-39c65ea98978","Type":"ContainerDied","Data":"e5a114a7a3f0e24e3bd57896ba3f86ef24a269e372ca019ce144df066c9e2be1"} Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.961760 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.969420 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.971679 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-47sz5" event={"ID":"551aecfb-7969-4644-ac50-b8f4c63002d3","Type":"ContainerStarted","Data":"33e12e7a910a881a922ed171c1d2a5e92dc23378252c88a8cc488f46dcc7cd9c"} Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.973078 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.985598 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.719876811 podStartE2EDuration="19.985581896s" podCreationTimestamp="2026-01-30 21:39:23 +0000 UTC" firstStartedPulling="2026-01-30 21:39:24.713649991 +0000 UTC m=+1503.459472640" lastFinishedPulling="2026-01-30 21:39:41.979355076 +0000 UTC m=+1520.725177725" observedRunningTime="2026-01-30 21:39:42.958469719 +0000 UTC m=+1521.704292368" watchObservedRunningTime="2026-01-30 21:39:42.985581896 +0000 UTC m=+1521.731404535" Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.006153 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-47sz5" podStartSLOduration=2.318406842 podStartE2EDuration="19.006133066s" podCreationTimestamp="2026-01-30 21:39:24 +0000 UTC" firstStartedPulling="2026-01-30 21:39:25.299350908 +0000 UTC m=+1504.045173557" lastFinishedPulling="2026-01-30 21:39:41.987077132 +0000 UTC m=+1520.732899781" observedRunningTime="2026-01-30 21:39:42.994318629 +0000 UTC m=+1521.740141288" watchObservedRunningTime="2026-01-30 21:39:43.006133066 +0000 UTC m=+1521.751955715" Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.056855 4751 scope.go:117] "RemoveContainer" containerID="21ac36001cb714817d8ab855578743e4c5c5ddbfabc891012d0c87386994da9f" Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.084877 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6c448464db-8pmrl"] Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.115706 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-6c448464db-8pmrl"] Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.145162 4751 scope.go:117] "RemoveContainer" containerID="25b90bc3624912ca065d76095b1602d95f2fe189d80a97579243f563f8a8fa45" Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.163273 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-586n4"] Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.219919 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-586n4"] Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.228454 4751 scope.go:117] "RemoveContainer" containerID="e023642d7e8f9f5527a83bfc616f033c2d4851bd320c9d6b4ef572caee21ef7c" Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.233876 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6ffb596769-rgv47"] Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.244389 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6ffb596769-rgv47"] Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.255318 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-84f9b8dd8f-qtmlz"] Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.315611 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-84f9b8dd8f-qtmlz"] Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.981949 4751 generic.go:334] "Generic (PLEG): container finished" podID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerID="6d06091377b2a8fc82610799f2cf8764b0bd657f28f4bab21ad30e6eb045d8d2" exitCode=2 Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.981984 4751 generic.go:334] "Generic (PLEG): container finished" 
podID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerID="cfdea75b80256723e6e5c7537ac03523b96b0f4ab2bf0621af6d62950a93c5b8" exitCode=0 Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.986502 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fea9c34-deff-4930-87b5-c697eb7831d8" path="/var/lib/kubelet/pods/7fea9c34-deff-4930-87b5-c697eb7831d8/volumes" Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.987064 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="826635d2-0549-4d63-84e2-3ba3cdf85db4" path="/var/lib/kubelet/pods/826635d2-0549-4d63-84e2-3ba3cdf85db4/volumes" Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.987626 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" path="/var/lib/kubelet/pods/8f68fda0-5c9c-46c2-82b3-633695c6e6f4/volumes" Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.991428 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" path="/var/lib/kubelet/pods/f42767ff-b1d3-49e9-8b8d-39c65ea98978/volumes" Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.992173 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511","Type":"ContainerDied","Data":"6d06091377b2a8fc82610799f2cf8764b0bd657f28f4bab21ad30e6eb045d8d2"} Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.992205 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511","Type":"ContainerDied","Data":"cfdea75b80256723e6e5c7537ac03523b96b0f4ab2bf0621af6d62950a93c5b8"} Jan 30 21:39:45 crc kubenswrapper[4751]: I0130 21:39:45.008007 4751 generic.go:334] "Generic (PLEG): container finished" podID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerID="9c01cf6df6cfdacc48a6527bc7f77429e8ab33e1ed7d506f200e6257569ad93c" exitCode=0 Jan 30 21:39:45 crc kubenswrapper[4751]: I0130 21:39:45.008055 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511","Type":"ContainerDied","Data":"9c01cf6df6cfdacc48a6527bc7f77429e8ab33e1ed7d506f200e6257569ad93c"} Jan 30 21:39:45 crc kubenswrapper[4751]: I0130 21:39:45.119041 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 21:39:45 crc kubenswrapper[4751]: I0130 21:39:45.119936 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="33588f5e-9224-4dd6-b689-0651c16d06bd" containerName="glance-log" containerID="cri-o://0e6fc40159796236c1d006a538830d10bc94cb3396f193843abf8cb478b98954" gracePeriod=30 Jan 30 21:39:45 crc kubenswrapper[4751]: I0130 21:39:45.120401 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="33588f5e-9224-4dd6-b689-0651c16d06bd" containerName="glance-httpd" containerID="cri-o://404ff17c9262956b5de69cff0c330fcf3cee139543dbe993153748a7e4076c5f" gracePeriod=30 Jan 30 21:39:46 crc kubenswrapper[4751]: I0130 21:39:46.022210 4751 generic.go:334] "Generic (PLEG): container finished" podID="33588f5e-9224-4dd6-b689-0651c16d06bd" containerID="0e6fc40159796236c1d006a538830d10bc94cb3396f193843abf8cb478b98954" exitCode=143 Jan 30 21:39:46 crc kubenswrapper[4751]: I0130 21:39:46.022301 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"33588f5e-9224-4dd6-b689-0651c16d06bd","Type":"ContainerDied","Data":"0e6fc40159796236c1d006a538830d10bc94cb3396f193843abf8cb478b98954"} Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.098628 4751 generic.go:334] "Generic (PLEG): container finished" podID="33588f5e-9224-4dd6-b689-0651c16d06bd" containerID="404ff17c9262956b5de69cff0c330fcf3cee139543dbe993153748a7e4076c5f" exitCode=0 Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.099275 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33588f5e-9224-4dd6-b689-0651c16d06bd","Type":"ContainerDied","Data":"404ff17c9262956b5de69cff0c330fcf3cee139543dbe993153748a7e4076c5f"} Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.190957 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.328145 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"33588f5e-9224-4dd6-b689-0651c16d06bd\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.328532 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfdlg\" (UniqueName: \"kubernetes.io/projected/33588f5e-9224-4dd6-b689-0651c16d06bd-kube-api-access-mfdlg\") pod \"33588f5e-9224-4dd6-b689-0651c16d06bd\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.328573 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-public-tls-certs\") pod \"33588f5e-9224-4dd6-b689-0651c16d06bd\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.328672 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-scripts\") pod \"33588f5e-9224-4dd6-b689-0651c16d06bd\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.328827 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-config-data\") pod \"33588f5e-9224-4dd6-b689-0651c16d06bd\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.328904 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-logs\") pod \"33588f5e-9224-4dd6-b689-0651c16d06bd\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.328959 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-combined-ca-bundle\") pod \"33588f5e-9224-4dd6-b689-0651c16d06bd\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.328996 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-httpd-run\") pod \"33588f5e-9224-4dd6-b689-0651c16d06bd\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.330052 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "33588f5e-9224-4dd6-b689-0651c16d06bd" (UID: "33588f5e-9224-4dd6-b689-0651c16d06bd"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.330071 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-logs" (OuterVolumeSpecName: "logs") pod "33588f5e-9224-4dd6-b689-0651c16d06bd" (UID: "33588f5e-9224-4dd6-b689-0651c16d06bd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.336713 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-scripts" (OuterVolumeSpecName: "scripts") pod "33588f5e-9224-4dd6-b689-0651c16d06bd" (UID: "33588f5e-9224-4dd6-b689-0651c16d06bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.370291 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33588f5e-9224-4dd6-b689-0651c16d06bd-kube-api-access-mfdlg" (OuterVolumeSpecName: "kube-api-access-mfdlg") pod "33588f5e-9224-4dd6-b689-0651c16d06bd" (UID: "33588f5e-9224-4dd6-b689-0651c16d06bd"). InnerVolumeSpecName "kube-api-access-mfdlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.408899 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a" (OuterVolumeSpecName: "glance") pod "33588f5e-9224-4dd6-b689-0651c16d06bd" (UID: "33588f5e-9224-4dd6-b689-0651c16d06bd"). InnerVolumeSpecName "pvc-03216ddc-ff0c-4c63-8e03-12380926233a". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.417700 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-config-data" (OuterVolumeSpecName: "config-data") pod "33588f5e-9224-4dd6-b689-0651c16d06bd" (UID: "33588f5e-9224-4dd6-b689-0651c16d06bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.432554 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.432590 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.432601 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.432630 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") on node \"crc\" " Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.432640 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfdlg\" (UniqueName: \"kubernetes.io/projected/33588f5e-9224-4dd6-b689-0651c16d06bd-kube-api-access-mfdlg\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.432649 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.445990 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "33588f5e-9224-4dd6-b689-0651c16d06bd" (UID: "33588f5e-9224-4dd6-b689-0651c16d06bd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.452616 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33588f5e-9224-4dd6-b689-0651c16d06bd" (UID: "33588f5e-9224-4dd6-b689-0651c16d06bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.480597 4751 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.480789 4751 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-03216ddc-ff0c-4c63-8e03-12380926233a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a") on node "crc" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.535263 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.535314 4751 reconciler_common.go:293] "Volume detached for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.535348 4751 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.541281 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.541728 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="58e79616-9b52-47f9-a43e-01cbd487fbbd" containerName="glance-httpd" containerID="cri-o://38d6ea6d17555bee86d24d0120b47bfe85898a3b92da2d1783b64c466b54936d" gracePeriod=30 Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.543133 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="58e79616-9b52-47f9-a43e-01cbd487fbbd" containerName="glance-log" containerID="cri-o://63bb4deba3a7aa55abb8828c7f8386975555baf8db7e5316f55f82adf4041031" gracePeriod=30 Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.119661 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33588f5e-9224-4dd6-b689-0651c16d06bd","Type":"ContainerDied","Data":"ddb9f9108d0450b2b505f7e37bbbf5c491b44e23c17e0903abd3c8bd376265b3"} Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.119977 4751 scope.go:117] "RemoveContainer" containerID="404ff17c9262956b5de69cff0c330fcf3cee139543dbe993153748a7e4076c5f" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.119735 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.138736 4751 generic.go:334] "Generic (PLEG): container finished" podID="58e79616-9b52-47f9-a43e-01cbd487fbbd" containerID="63bb4deba3a7aa55abb8828c7f8386975555baf8db7e5316f55f82adf4041031" exitCode=143 Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.138798 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"58e79616-9b52-47f9-a43e-01cbd487fbbd","Type":"ContainerDied","Data":"63bb4deba3a7aa55abb8828c7f8386975555baf8db7e5316f55f82adf4041031"} Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.147248 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.154617 4751 scope.go:117] "RemoveContainer" containerID="0e6fc40159796236c1d006a538830d10bc94cb3396f193843abf8cb478b98954" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.158073 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.192259 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.195981 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7782d459-57bc-442e-a471-6c5839d6de47" containerName="heat-api" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196002 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7782d459-57bc-442e-a471-6c5839d6de47" containerName="heat-api" Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.196025 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="extract-utilities" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196031 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="extract-utilities" Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.196043 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fea9c34-deff-4930-87b5-c697eb7831d8" containerName="heat-engine" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196050 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fea9c34-deff-4930-87b5-c697eb7831d8" containerName="heat-engine" Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.196059 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33588f5e-9224-4dd6-b689-0651c16d06bd" containerName="glance-httpd" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196073 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="33588f5e-9224-4dd6-b689-0651c16d06bd" containerName="glance-httpd" Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.196086 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" containerName="heat-api" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196092 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" containerName="heat-api" Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.196105 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33588f5e-9224-4dd6-b689-0651c16d06bd" containerName="glance-log" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196111 4751 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="33588f5e-9224-4dd6-b689-0651c16d06bd" containerName="glance-log" Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.196120 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="extract-content" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196127 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="extract-content" Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.196144 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="826635d2-0549-4d63-84e2-3ba3cdf85db4" containerName="heat-cfnapi" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196152 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="826635d2-0549-4d63-84e2-3ba3cdf85db4" containerName="heat-cfnapi" Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.196180 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196186 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server" Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.196196 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="826635d2-0549-4d63-84e2-3ba3cdf85db4" containerName="heat-cfnapi" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196202 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="826635d2-0549-4d63-84e2-3ba3cdf85db4" containerName="heat-cfnapi" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196412 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="826635d2-0549-4d63-84e2-3ba3cdf85db4" containerName="heat-cfnapi" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196421 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7782d459-57bc-442e-a471-6c5839d6de47" containerName="heat-api" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196429 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="33588f5e-9224-4dd6-b689-0651c16d06bd" containerName="glance-httpd" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196445 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="33588f5e-9224-4dd6-b689-0651c16d06bd" containerName="glance-log" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196460 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196472 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fea9c34-deff-4930-87b5-c697eb7831d8" containerName="heat-engine" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196483 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" containerName="heat-api" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196489 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" containerName="heat-api" Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.196731 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" containerName="heat-api" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196738 4751 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" containerName="heat-api" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196946 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="826635d2-0549-4d63-84e2-3ba3cdf85db4" containerName="heat-cfnapi" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.197789 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.207021 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.207266 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.210002 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.357171 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-scripts\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.357237 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.357539 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44bqj\" (UniqueName: \"kubernetes.io/projected/cef73daf-a49c-4b32-8ebc-fe0adf90df58-kube-api-access-44bqj\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.357676 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cef73daf-a49c-4b32-8ebc-fe0adf90df58-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.357828 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-config-data\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.357900 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.357964 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cef73daf-a49c-4b32-8ebc-fe0adf90df58-logs\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.358102 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.460346 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.460404 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cef73daf-a49c-4b32-8ebc-fe0adf90df58-logs\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.460462 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.460507 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-scripts\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.460534 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.460595 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44bqj\" (UniqueName: \"kubernetes.io/projected/cef73daf-a49c-4b32-8ebc-fe0adf90df58-kube-api-access-44bqj\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.460655 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cef73daf-a49c-4b32-8ebc-fe0adf90df58-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.460733 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-config-data\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.461299 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cef73daf-a49c-4b32-8ebc-fe0adf90df58-logs\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.461499 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cef73daf-a49c-4b32-8ebc-fe0adf90df58-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.473034 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.473081 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bd5683b2fac8da06378b2d5eb72c7d0b6faa54e75d4b318b8013499a38483353/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.474117 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-scripts\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.474340 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.474506 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-config-data\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.477209 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.489816 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44bqj\" (UniqueName: 
\"kubernetes.io/projected/cef73daf-a49c-4b32-8ebc-fe0adf90df58-kube-api-access-44bqj\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.614948 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0" Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.818608 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 21:39:51 crc kubenswrapper[4751]: I0130 21:39:51.455003 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 21:39:51 crc kubenswrapper[4751]: I0130 21:39:51.992955 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33588f5e-9224-4dd6-b689-0651c16d06bd" path="/var/lib/kubelet/pods/33588f5e-9224-4dd6-b689-0651c16d06bd/volumes" Jan 30 21:39:52 crc kubenswrapper[4751]: I0130 21:39:52.170007 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cef73daf-a49c-4b32-8ebc-fe0adf90df58","Type":"ContainerStarted","Data":"21fda0ee4b452e4895e60b33082ee76ed41388393b36d1134ac1dfeb12851f9c"} Jan 30 21:39:52 crc kubenswrapper[4751]: I0130 21:39:52.170049 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cef73daf-a49c-4b32-8ebc-fe0adf90df58","Type":"ContainerStarted","Data":"2cf83b9bee222cd565d9a4680e34f3affd4a5a161e9b021b056cbef0ae4e5192"} Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.183659 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cef73daf-a49c-4b32-8ebc-fe0adf90df58","Type":"ContainerStarted","Data":"e118abf9a7405284a471846ef041080bfb9c2acc14afcc380561ca808dce2a05"} Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.187670 4751 generic.go:334] "Generic (PLEG): container finished" podID="58e79616-9b52-47f9-a43e-01cbd487fbbd" containerID="38d6ea6d17555bee86d24d0120b47bfe85898a3b92da2d1783b64c466b54936d" exitCode=0 Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.187712 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"58e79616-9b52-47f9-a43e-01cbd487fbbd","Type":"ContainerDied","Data":"38d6ea6d17555bee86d24d0120b47bfe85898a3b92da2d1783b64c466b54936d"} Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.212382 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.212365628 podStartE2EDuration="3.212365628s" podCreationTimestamp="2026-01-30 21:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:53.211504735 +0000 UTC m=+1531.957327384" watchObservedRunningTime="2026-01-30 21:39:53.212365628 +0000 UTC m=+1531.958188277" Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.320282 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.439251 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-internal-tls-certs\") pod \"58e79616-9b52-47f9-a43e-01cbd487fbbd\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.439383 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xqhk\" (UniqueName: \"kubernetes.io/projected/58e79616-9b52-47f9-a43e-01cbd487fbbd-kube-api-access-4xqhk\") pod \"58e79616-9b52-47f9-a43e-01cbd487fbbd\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.439449 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-scripts\") pod \"58e79616-9b52-47f9-a43e-01cbd487fbbd\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.439477 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-logs\") pod \"58e79616-9b52-47f9-a43e-01cbd487fbbd\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.439516 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-config-data\") pod \"58e79616-9b52-47f9-a43e-01cbd487fbbd\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.439577 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-httpd-run\") pod \"58e79616-9b52-47f9-a43e-01cbd487fbbd\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.439597 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-combined-ca-bundle\") pod \"58e79616-9b52-47f9-a43e-01cbd487fbbd\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.440191 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-logs" (OuterVolumeSpecName: "logs") pod "58e79616-9b52-47f9-a43e-01cbd487fbbd" (UID: "58e79616-9b52-47f9-a43e-01cbd487fbbd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.440414 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "58e79616-9b52-47f9-a43e-01cbd487fbbd" (UID: "58e79616-9b52-47f9-a43e-01cbd487fbbd"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.441881 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"58e79616-9b52-47f9-a43e-01cbd487fbbd\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.442733 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.442752 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.445156 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-scripts" (OuterVolumeSpecName: "scripts") pod "58e79616-9b52-47f9-a43e-01cbd487fbbd" (UID: "58e79616-9b52-47f9-a43e-01cbd487fbbd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.447921 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58e79616-9b52-47f9-a43e-01cbd487fbbd-kube-api-access-4xqhk" (OuterVolumeSpecName: "kube-api-access-4xqhk") pod "58e79616-9b52-47f9-a43e-01cbd487fbbd" (UID: "58e79616-9b52-47f9-a43e-01cbd487fbbd"). InnerVolumeSpecName "kube-api-access-4xqhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.471251 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227" (OuterVolumeSpecName: "glance") pod "58e79616-9b52-47f9-a43e-01cbd487fbbd" (UID: "58e79616-9b52-47f9-a43e-01cbd487fbbd"). InnerVolumeSpecName "pvc-2b6fe968-3470-4548-ade6-9a3644e74227". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.477117 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58e79616-9b52-47f9-a43e-01cbd487fbbd" (UID: "58e79616-9b52-47f9-a43e-01cbd487fbbd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.500177 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-config-data" (OuterVolumeSpecName: "config-data") pod "58e79616-9b52-47f9-a43e-01cbd487fbbd" (UID: "58e79616-9b52-47f9-a43e-01cbd487fbbd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.504897 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "58e79616-9b52-47f9-a43e-01cbd487fbbd" (UID: "58e79616-9b52-47f9-a43e-01cbd487fbbd"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.544529 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.544748 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") on node \"crc\" " Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.544863 4751 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.544957 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xqhk\" (UniqueName: \"kubernetes.io/projected/58e79616-9b52-47f9-a43e-01cbd487fbbd-kube-api-access-4xqhk\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.545037 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.545120 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.576821 4751 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.576982 4751 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2b6fe968-3470-4548-ade6-9a3644e74227" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227") on node "crc" Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.646815 4751 reconciler_common.go:293] "Volume detached for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.958718 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.201352 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"58e79616-9b52-47f9-a43e-01cbd487fbbd","Type":"ContainerDied","Data":"c29e1210fa3b7bf7fada16b3ee12edca743fdf4523588bf99fc7a7b6aa8b0f6d"} Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.201444 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.201719 4751 scope.go:117] "RemoveContainer" containerID="38d6ea6d17555bee86d24d0120b47bfe85898a3b92da2d1783b64c466b54936d" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.234731 4751 scope.go:117] "RemoveContainer" containerID="63bb4deba3a7aa55abb8828c7f8386975555baf8db7e5316f55f82adf4041031" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.244752 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.258140 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.276789 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 21:39:54 crc kubenswrapper[4751]: E0130 21:39:54.277279 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e79616-9b52-47f9-a43e-01cbd487fbbd" containerName="glance-httpd" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.277297 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e79616-9b52-47f9-a43e-01cbd487fbbd" containerName="glance-httpd" Jan 30 21:39:54 crc kubenswrapper[4751]: E0130 21:39:54.277313 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e79616-9b52-47f9-a43e-01cbd487fbbd" containerName="glance-log" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.277362 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e79616-9b52-47f9-a43e-01cbd487fbbd" containerName="glance-log" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.277632 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e79616-9b52-47f9-a43e-01cbd487fbbd" containerName="glance-httpd" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.277660 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e79616-9b52-47f9-a43e-01cbd487fbbd" containerName="glance-log" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.278837 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.281891 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.282170 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.324839 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.362978 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.363056 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjl2v\" (UniqueName: \"kubernetes.io/projected/4dcf400d-5171-4388-bfbc-18d62a106a12-kube-api-access-wjl2v\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.363093 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dcf400d-5171-4388-bfbc-18d62a106a12-logs\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.363149 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.363166 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4dcf400d-5171-4388-bfbc-18d62a106a12-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.363339 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.363371 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.363405 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.465210 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjl2v\" (UniqueName: \"kubernetes.io/projected/4dcf400d-5171-4388-bfbc-18d62a106a12-kube-api-access-wjl2v\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.465265 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dcf400d-5171-4388-bfbc-18d62a106a12-logs\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.465372 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.465396 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4dcf400d-5171-4388-bfbc-18d62a106a12-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.465506 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.465553 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.465598 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.466046 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4dcf400d-5171-4388-bfbc-18d62a106a12-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.466239 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dcf400d-5171-4388-bfbc-18d62a106a12-logs\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.466925 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.468176 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.468210 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1439638cb8026f3fbd74a1d30ab35170ee3b35899e999b31e76311ef8605b4f/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.471746 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.471936 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.472539 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.486473 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.492163 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjl2v\" (UniqueName: \"kubernetes.io/projected/4dcf400d-5171-4388-bfbc-18d62a106a12-kube-api-access-wjl2v\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.518224 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.598673 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.976586 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:39:54 crc kubenswrapper[4751]: E0130 21:39:54.977629 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:39:55 crc kubenswrapper[4751]: I0130 21:39:55.262969 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 21:39:55 crc kubenswrapper[4751]: I0130 21:39:55.992425 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58e79616-9b52-47f9-a43e-01cbd487fbbd" path="/var/lib/kubelet/pods/58e79616-9b52-47f9-a43e-01cbd487fbbd/volumes" Jan 30 21:39:56 crc kubenswrapper[4751]: I0130 21:39:56.246263 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4dcf400d-5171-4388-bfbc-18d62a106a12","Type":"ContainerStarted","Data":"a19bf02ecb70e0fe883ae7aae03f7533c26003e4e1c6c0dcf98df324824484d2"} Jan 30 21:39:56 crc kubenswrapper[4751]: I0130 21:39:56.246561 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4dcf400d-5171-4388-bfbc-18d62a106a12","Type":"ContainerStarted","Data":"507de3209d8750a346e93ee56fa1c608fdc16a418050cdd1c7a897a63d663a5a"} Jan 30 21:39:56 crc kubenswrapper[4751]: E0130 21:39:56.853784 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod551aecfb_7969_4644_ac50_b8f4c63002d3.slice/crio-conmon-33e12e7a910a881a922ed171c1d2a5e92dc23378252c88a8cc488f46dcc7cd9c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod551aecfb_7969_4644_ac50_b8f4c63002d3.slice/crio-33e12e7a910a881a922ed171c1d2a5e92dc23378252c88a8cc488f46dcc7cd9c.scope\": RecentStats: unable to find data in memory cache]" Jan 30 21:39:57 crc kubenswrapper[4751]: I0130 21:39:57.272090 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4dcf400d-5171-4388-bfbc-18d62a106a12","Type":"ContainerStarted","Data":"7619d54769c5b3e3db3335e6add7163abcd5624f641ef8fef9987ea8738ba811"} Jan 30 21:39:57 crc kubenswrapper[4751]: I0130 21:39:57.276291 4751 generic.go:334] "Generic (PLEG): container finished" podID="551aecfb-7969-4644-ac50-b8f4c63002d3" containerID="33e12e7a910a881a922ed171c1d2a5e92dc23378252c88a8cc488f46dcc7cd9c" exitCode=0 Jan 30 21:39:57 crc kubenswrapper[4751]: I0130 21:39:57.276357 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-conductor-db-sync-47sz5" event={"ID":"551aecfb-7969-4644-ac50-b8f4c63002d3","Type":"ContainerDied","Data":"33e12e7a910a881a922ed171c1d2a5e92dc23378252c88a8cc488f46dcc7cd9c"} Jan 30 21:39:57 crc kubenswrapper[4751]: I0130 21:39:57.300076 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.300055999 podStartE2EDuration="3.300055999s" podCreationTimestamp="2026-01-30 21:39:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:57.291981143 +0000 UTC m=+1536.037803822" watchObservedRunningTime="2026-01-30 21:39:57.300055999 +0000 UTC m=+1536.045878648" Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.703893 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.764540 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-scripts\") pod \"551aecfb-7969-4644-ac50-b8f4c63002d3\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.764720 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-config-data\") pod \"551aecfb-7969-4644-ac50-b8f4c63002d3\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.764761 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzpw7\" (UniqueName: \"kubernetes.io/projected/551aecfb-7969-4644-ac50-b8f4c63002d3-kube-api-access-fzpw7\") pod \"551aecfb-7969-4644-ac50-b8f4c63002d3\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.764929 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-combined-ca-bundle\") pod \"551aecfb-7969-4644-ac50-b8f4c63002d3\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.770767 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-scripts" (OuterVolumeSpecName: "scripts") pod "551aecfb-7969-4644-ac50-b8f4c63002d3" (UID: "551aecfb-7969-4644-ac50-b8f4c63002d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.772512 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/551aecfb-7969-4644-ac50-b8f4c63002d3-kube-api-access-fzpw7" (OuterVolumeSpecName: "kube-api-access-fzpw7") pod "551aecfb-7969-4644-ac50-b8f4c63002d3" (UID: "551aecfb-7969-4644-ac50-b8f4c63002d3"). InnerVolumeSpecName "kube-api-access-fzpw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.799305 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-config-data" (OuterVolumeSpecName: "config-data") pod "551aecfb-7969-4644-ac50-b8f4c63002d3" (UID: "551aecfb-7969-4644-ac50-b8f4c63002d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.806888 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "551aecfb-7969-4644-ac50-b8f4c63002d3" (UID: "551aecfb-7969-4644-ac50-b8f4c63002d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.867335 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.867391 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.867403 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzpw7\" (UniqueName: \"kubernetes.io/projected/551aecfb-7969-4644-ac50-b8f4c63002d3-kube-api-access-fzpw7\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.867413 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.300710 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-47sz5" event={"ID":"551aecfb-7969-4644-ac50-b8f4c63002d3","Type":"ContainerDied","Data":"ee0d949c9abfb18e45a0aba7521f0154b8bf9089739c141933f8355d900aff65"} Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.300763 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee0d949c9abfb18e45a0aba7521f0154b8bf9089739c141933f8355d900aff65" Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.300776 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.412230 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 21:39:59 crc kubenswrapper[4751]: E0130 21:39:59.412702 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551aecfb-7969-4644-ac50-b8f4c63002d3" containerName="nova-cell0-conductor-db-sync" Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.412718 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="551aecfb-7969-4644-ac50-b8f4c63002d3" containerName="nova-cell0-conductor-db-sync" Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.412949 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="551aecfb-7969-4644-ac50-b8f4c63002d3" containerName="nova-cell0-conductor-db-sync" Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.413714 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.415659 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5j6gn" Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.418712 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.425796 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.486958 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g22rd\" (UniqueName: \"kubernetes.io/projected/b058a895-614b-4e97-840e-dbb229de8109-kube-api-access-g22rd\") pod \"nova-cell0-conductor-0\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") " pod="openstack/nova-cell0-conductor-0" Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.487234 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") " pod="openstack/nova-cell0-conductor-0" Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.487628 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") " pod="openstack/nova-cell0-conductor-0" Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.589547 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g22rd\" (UniqueName: \"kubernetes.io/projected/b058a895-614b-4e97-840e-dbb229de8109-kube-api-access-g22rd\") pod \"nova-cell0-conductor-0\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") " pod="openstack/nova-cell0-conductor-0" Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.589676 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") " pod="openstack/nova-cell0-conductor-0" Jan 30 21:39:59 crc kubenswrapper[4751]: 
I0130 21:39:59.589805 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") " pod="openstack/nova-cell0-conductor-0" Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.593146 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") " pod="openstack/nova-cell0-conductor-0" Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.594878 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") " pod="openstack/nova-cell0-conductor-0" Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.615596 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g22rd\" (UniqueName: \"kubernetes.io/projected/b058a895-614b-4e97-840e-dbb229de8109-kube-api-access-g22rd\") pod \"nova-cell0-conductor-0\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") " pod="openstack/nova-cell0-conductor-0" Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.730279 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 21:40:00 crc kubenswrapper[4751]: I0130 21:40:00.268697 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 21:40:00 crc kubenswrapper[4751]: I0130 21:40:00.315898 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b058a895-614b-4e97-840e-dbb229de8109","Type":"ContainerStarted","Data":"798ef32e2296c6eacd1b15ce640930b8569775ee60956cf5d1bdd298f4ab3819"} Jan 30 21:40:00 crc kubenswrapper[4751]: I0130 21:40:00.819699 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 21:40:00 crc kubenswrapper[4751]: I0130 21:40:00.821516 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 21:40:00 crc kubenswrapper[4751]: I0130 21:40:00.869530 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 21:40:00 crc kubenswrapper[4751]: I0130 21:40:00.871250 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 21:40:01 crc kubenswrapper[4751]: I0130 21:40:01.328784 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b058a895-614b-4e97-840e-dbb229de8109","Type":"ContainerStarted","Data":"598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc"} Jan 30 21:40:01 crc kubenswrapper[4751]: I0130 21:40:01.329880 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 21:40:01 crc kubenswrapper[4751]: I0130 21:40:01.329915 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 21:40:01 crc kubenswrapper[4751]: I0130 
21:40:01.329924 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 21:40:01 crc kubenswrapper[4751]: I0130 21:40:01.355216 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.355197847 podStartE2EDuration="2.355197847s" podCreationTimestamp="2026-01-30 21:39:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:01.35048725 +0000 UTC m=+1540.096309899" watchObservedRunningTime="2026-01-30 21:40:01.355197847 +0000 UTC m=+1540.101020496" Jan 30 21:40:02 crc kubenswrapper[4751]: I0130 21:40:02.646983 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 21:40:03 crc kubenswrapper[4751]: I0130 21:40:03.369209 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:40:03 crc kubenswrapper[4751]: I0130 21:40:03.369563 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:40:04 crc kubenswrapper[4751]: I0130 21:40:04.002272 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 21:40:04 crc kubenswrapper[4751]: I0130 21:40:04.004489 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 21:40:04 crc kubenswrapper[4751]: I0130 21:40:04.379781 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="b058a895-614b-4e97-840e-dbb229de8109" containerName="nova-cell0-conductor-conductor" containerID="cri-o://598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc" gracePeriod=30 Jan 30 21:40:04 crc kubenswrapper[4751]: I0130 21:40:04.599291 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 21:40:04 crc kubenswrapper[4751]: I0130 21:40:04.599363 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 21:40:04 crc kubenswrapper[4751]: I0130 21:40:04.644286 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 21:40:04 crc kubenswrapper[4751]: I0130 21:40:04.648546 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 21:40:05 crc kubenswrapper[4751]: I0130 21:40:05.392559 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 21:40:05 crc kubenswrapper[4751]: I0130 21:40:05.392606 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.293050 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.400719 4751 generic.go:334] "Generic (PLEG): container finished" podID="b058a895-614b-4e97-840e-dbb229de8109" containerID="598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc" exitCode=0 Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.400783 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.400783 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b058a895-614b-4e97-840e-dbb229de8109","Type":"ContainerDied","Data":"598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc"} Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.400975 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b058a895-614b-4e97-840e-dbb229de8109","Type":"ContainerDied","Data":"798ef32e2296c6eacd1b15ce640930b8569775ee60956cf5d1bdd298f4ab3819"} Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.401022 4751 scope.go:117] "RemoveContainer" containerID="598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.427393 4751 scope.go:117] "RemoveContainer" containerID="598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc" Jan 30 21:40:06 crc kubenswrapper[4751]: E0130 21:40:06.427863 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc\": container with ID starting with 598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc not found: ID does not exist" containerID="598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.427954 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc"} err="failed to get container status \"598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc\": rpc error: code = NotFound desc = could not find container \"598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc\": container with ID starting with 598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc not found: ID does not exist" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.456929 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-config-data\") pod \"b058a895-614b-4e97-840e-dbb229de8109\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") " Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.457125 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g22rd\" (UniqueName: \"kubernetes.io/projected/b058a895-614b-4e97-840e-dbb229de8109-kube-api-access-g22rd\") pod \"b058a895-614b-4e97-840e-dbb229de8109\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") " Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.457267 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-combined-ca-bundle\") pod \"b058a895-614b-4e97-840e-dbb229de8109\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") " Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.462893 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b058a895-614b-4e97-840e-dbb229de8109-kube-api-access-g22rd" (OuterVolumeSpecName: "kube-api-access-g22rd") pod "b058a895-614b-4e97-840e-dbb229de8109" (UID: "b058a895-614b-4e97-840e-dbb229de8109"). 
InnerVolumeSpecName "kube-api-access-g22rd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.489270 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b058a895-614b-4e97-840e-dbb229de8109" (UID: "b058a895-614b-4e97-840e-dbb229de8109"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.492737 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-config-data" (OuterVolumeSpecName: "config-data") pod "b058a895-614b-4e97-840e-dbb229de8109" (UID: "b058a895-614b-4e97-840e-dbb229de8109"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.560663 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g22rd\" (UniqueName: \"kubernetes.io/projected/b058a895-614b-4e97-840e-dbb229de8109-kube-api-access-g22rd\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.560716 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.560735 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.750960 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.768044 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.779928 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 21:40:06 crc kubenswrapper[4751]: E0130 21:40:06.780587 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b058a895-614b-4e97-840e-dbb229de8109" containerName="nova-cell0-conductor-conductor" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.780607 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b058a895-614b-4e97-840e-dbb229de8109" containerName="nova-cell0-conductor-conductor" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.781202 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b058a895-614b-4e97-840e-dbb229de8109" containerName="nova-cell0-conductor-conductor" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.781992 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.791296 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.793728 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5j6gn" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.793808 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.867565 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb304d7-db8e-4943-b0bc-d30a4332df91-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9bb304d7-db8e-4943-b0bc-d30a4332df91\") " pod="openstack/nova-cell0-conductor-0" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.868170 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnp5c\" (UniqueName: \"kubernetes.io/projected/9bb304d7-db8e-4943-b0bc-d30a4332df91-kube-api-access-cnp5c\") pod \"nova-cell0-conductor-0\" (UID: \"9bb304d7-db8e-4943-b0bc-d30a4332df91\") " pod="openstack/nova-cell0-conductor-0" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.868245 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb304d7-db8e-4943-b0bc-d30a4332df91-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9bb304d7-db8e-4943-b0bc-d30a4332df91\") " pod="openstack/nova-cell0-conductor-0" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.971079 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnp5c\" (UniqueName: \"kubernetes.io/projected/9bb304d7-db8e-4943-b0bc-d30a4332df91-kube-api-access-cnp5c\") pod \"nova-cell0-conductor-0\" (UID: \"9bb304d7-db8e-4943-b0bc-d30a4332df91\") " pod="openstack/nova-cell0-conductor-0" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.971142 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb304d7-db8e-4943-b0bc-d30a4332df91-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9bb304d7-db8e-4943-b0bc-d30a4332df91\") " pod="openstack/nova-cell0-conductor-0" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.971221 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb304d7-db8e-4943-b0bc-d30a4332df91-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9bb304d7-db8e-4943-b0bc-d30a4332df91\") " pod="openstack/nova-cell0-conductor-0" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.984053 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb304d7-db8e-4943-b0bc-d30a4332df91-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9bb304d7-db8e-4943-b0bc-d30a4332df91\") " pod="openstack/nova-cell0-conductor-0" Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.992931 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb304d7-db8e-4943-b0bc-d30a4332df91-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"9bb304d7-db8e-4943-b0bc-d30a4332df91\") " pod="openstack/nova-cell0-conductor-0" Jan 30 21:40:07 crc kubenswrapper[4751]: I0130 21:40:07.041005 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnp5c\" (UniqueName: \"kubernetes.io/projected/9bb304d7-db8e-4943-b0bc-d30a4332df91-kube-api-access-cnp5c\") pod \"nova-cell0-conductor-0\" (UID: \"9bb304d7-db8e-4943-b0bc-d30a4332df91\") " pod="openstack/nova-cell0-conductor-0" Jan 30 21:40:07 crc kubenswrapper[4751]: I0130 21:40:07.110938 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 21:40:07 crc kubenswrapper[4751]: I0130 21:40:07.414728 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:40:07 crc kubenswrapper[4751]: I0130 21:40:07.415022 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:40:07 crc kubenswrapper[4751]: I0130 21:40:07.623451 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 21:40:07 crc kubenswrapper[4751]: W0130 21:40:07.624850 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb304d7_db8e_4943_b0bc_d30a4332df91.slice/crio-98651a0114ca2c5a245b87fbd706bda83f03044f38a509e1f179b0fe7ac20007 WatchSource:0}: Error finding container 98651a0114ca2c5a245b87fbd706bda83f03044f38a509e1f179b0fe7ac20007: Status 404 returned error can't find the container with id 98651a0114ca2c5a245b87fbd706bda83f03044f38a509e1f179b0fe7ac20007 Jan 30 21:40:07 crc kubenswrapper[4751]: I0130 21:40:07.654213 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 21:40:07 crc kubenswrapper[4751]: I0130 21:40:07.662069 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 21:40:08 crc kubenswrapper[4751]: I0130 21:40:08.006479 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b058a895-614b-4e97-840e-dbb229de8109" path="/var/lib/kubelet/pods/b058a895-614b-4e97-840e-dbb229de8109/volumes" Jan 30 21:40:08 crc kubenswrapper[4751]: I0130 21:40:08.426066 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9bb304d7-db8e-4943-b0bc-d30a4332df91","Type":"ContainerStarted","Data":"325f8f8f8ab243f01a7441d40332d9370852ec622074345f9b9f4fa08443a408"} Jan 30 21:40:08 crc kubenswrapper[4751]: I0130 21:40:08.426834 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 21:40:08 crc kubenswrapper[4751]: I0130 21:40:08.426852 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9bb304d7-db8e-4943-b0bc-d30a4332df91","Type":"ContainerStarted","Data":"98651a0114ca2c5a245b87fbd706bda83f03044f38a509e1f179b0fe7ac20007"} Jan 30 21:40:08 crc kubenswrapper[4751]: I0130 21:40:08.445076 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.445054484 podStartE2EDuration="2.445054484s" podCreationTimestamp="2026-01-30 21:40:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:08.444601742 +0000 UTC m=+1547.190424391" watchObservedRunningTime="2026-01-30 
21:40:08.445054484 +0000 UTC m=+1547.190877133" Jan 30 21:40:09 crc kubenswrapper[4751]: I0130 21:40:09.980192 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:40:09 crc kubenswrapper[4751]: E0130 21:40:09.980831 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.160634 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.696786 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-x464h"] Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.698160 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x464h" Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.700039 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.700426 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.735023 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-x464h"] Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.813793 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h" Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.813922 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-scripts\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h" Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.813987 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-config-data\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h" Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.814034 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr8ls\" (UniqueName: \"kubernetes.io/projected/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-kube-api-access-gr8ls\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h" Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.884953 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 
21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.886426 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.891738 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.901808 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.913064 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-vm8dd"] Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.917125 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h" Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.917285 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-scripts\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h" Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.917439 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-config-data\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h" Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.917531 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr8ls\" (UniqueName: \"kubernetes.io/projected/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-kube-api-access-gr8ls\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h" Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.921399 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-vm8dd" Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.933408 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-scripts\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h" Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.951634 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h" Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.952923 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-config-data\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h" Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.965521 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr8ls\" (UniqueName: \"kubernetes.io/projected/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-kube-api-access-gr8ls\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.005846 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-vm8dd"] Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.038818 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x464h" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.039896 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b97x5\" (UniqueName: \"kubernetes.io/projected/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-kube-api-access-b97x5\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.039938 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.039977 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrph8\" (UniqueName: \"kubernetes.io/projected/f243fc38-73c3-44ef-98b1-8c3086761087-kube-api-access-mrph8\") pod \"aodh-db-create-vm8dd\" (UID: \"f243fc38-73c3-44ef-98b1-8c3086761087\") " pod="openstack/aodh-db-create-vm8dd" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.040072 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f243fc38-73c3-44ef-98b1-8c3086761087-operator-scripts\") pod \"aodh-db-create-vm8dd\" (UID: \"f243fc38-73c3-44ef-98b1-8c3086761087\") " pod="openstack/aodh-db-create-vm8dd" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.040135 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.129376 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.131901 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.142022 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrph8\" (UniqueName: \"kubernetes.io/projected/f243fc38-73c3-44ef-98b1-8c3086761087-kube-api-access-mrph8\") pod \"aodh-db-create-vm8dd\" (UID: \"f243fc38-73c3-44ef-98b1-8c3086761087\") " pod="openstack/aodh-db-create-vm8dd" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.142123 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f243fc38-73c3-44ef-98b1-8c3086761087-operator-scripts\") pod \"aodh-db-create-vm8dd\" (UID: \"f243fc38-73c3-44ef-98b1-8c3086761087\") " pod="openstack/aodh-db-create-vm8dd" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.142183 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.142262 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b97x5\" (UniqueName: \"kubernetes.io/projected/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-kube-api-access-b97x5\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.142283 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.143845 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f243fc38-73c3-44ef-98b1-8c3086761087-operator-scripts\") pod \"aodh-db-create-vm8dd\" (UID: \"f243fc38-73c3-44ef-98b1-8c3086761087\") " pod="openstack/aodh-db-create-vm8dd" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.149662 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.156152 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.185586 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrph8\" (UniqueName: \"kubernetes.io/projected/f243fc38-73c3-44ef-98b1-8c3086761087-kube-api-access-mrph8\") pod \"aodh-db-create-vm8dd\" (UID: \"f243fc38-73c3-44ef-98b1-8c3086761087\") " pod="openstack/aodh-db-create-vm8dd" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.189509 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.192439 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.229170 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b97x5\" (UniqueName: \"kubernetes.io/projected/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-kube-api-access-b97x5\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.249877 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0010-account-create-update-t2pkp"] Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.251388 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0010-account-create-update-t2pkp" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.269542 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.279533 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.279676 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsttj\" (UniqueName: \"kubernetes.io/projected/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-kube-api-access-dsttj\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.279709 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-logs\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.279878 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-config-data\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.284671 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0010-account-create-update-t2pkp"] Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.316973 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.327422 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.334720 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.343706 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.351657 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-vm8dd" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.381705 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-config-data\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.381749 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.381783 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-logs\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.381801 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.381847 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf8kb\" (UniqueName: \"kubernetes.io/projected/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-kube-api-access-cf8kb\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.381889 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsttj\" (UniqueName: \"kubernetes.io/projected/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-kube-api-access-dsttj\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.381911 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-logs\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.381930 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eb55a4f-933c-4871-b2d4-aed75e1449d7-operator-scripts\") pod \"aodh-0010-account-create-update-t2pkp\" (UID: \"8eb55a4f-933c-4871-b2d4-aed75e1449d7\") " pod="openstack/aodh-0010-account-create-update-t2pkp" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.382042 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-config-data\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.385460 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvbkg\" (UniqueName: \"kubernetes.io/projected/8eb55a4f-933c-4871-b2d4-aed75e1449d7-kube-api-access-zvbkg\") pod \"aodh-0010-account-create-update-t2pkp\" (UID: \"8eb55a4f-933c-4871-b2d4-aed75e1449d7\") " pod="openstack/aodh-0010-account-create-update-t2pkp" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.386934 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-logs\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.387986 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.389713 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.396621 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.397011 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.402593 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-config-data\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.423172 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsttj\" (UniqueName: \"kubernetes.io/projected/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-kube-api-access-dsttj\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.465962 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.492830 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf8kb\" (UniqueName: \"kubernetes.io/projected/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-kube-api-access-cf8kb\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.492873 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") " pod="openstack/nova-scheduler-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.492939 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eb55a4f-933c-4871-b2d4-aed75e1449d7-operator-scripts\") pod \"aodh-0010-account-create-update-t2pkp\" (UID: \"8eb55a4f-933c-4871-b2d4-aed75e1449d7\") " pod="openstack/aodh-0010-account-create-update-t2pkp" Jan 30 21:40:13 
crc kubenswrapper[4751]: I0130 21:40:13.493071 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvbkg\" (UniqueName: \"kubernetes.io/projected/8eb55a4f-933c-4871-b2d4-aed75e1449d7-kube-api-access-zvbkg\") pod \"aodh-0010-account-create-update-t2pkp\" (UID: \"8eb55a4f-933c-4871-b2d4-aed75e1449d7\") " pod="openstack/aodh-0010-account-create-update-t2pkp" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.493109 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvz4f\" (UniqueName: \"kubernetes.io/projected/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-kube-api-access-fvz4f\") pod \"nova-scheduler-0\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") " pod="openstack/nova-scheduler-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.493146 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-config-data\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.493175 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-logs\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.493191 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.493209 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-config-data\") pod \"nova-scheduler-0\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") " pod="openstack/nova-scheduler-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.499571 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eb55a4f-933c-4871-b2d4-aed75e1449d7-operator-scripts\") pod \"aodh-0010-account-create-update-t2pkp\" (UID: \"8eb55a4f-933c-4871-b2d4-aed75e1449d7\") " pod="openstack/aodh-0010-account-create-update-t2pkp" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.499995 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-logs\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.514398 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-config-data\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.517527 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.518099 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.525821 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf8kb\" (UniqueName: \"kubernetes.io/projected/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-kube-api-access-cf8kb\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.528048 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hpws7"] Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.530427 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.535866 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvbkg\" (UniqueName: \"kubernetes.io/projected/8eb55a4f-933c-4871-b2d4-aed75e1449d7-kube-api-access-zvbkg\") pod \"aodh-0010-account-create-update-t2pkp\" (UID: \"8eb55a4f-933c-4871-b2d4-aed75e1449d7\") " pod="openstack/aodh-0010-account-create-update-t2pkp" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.578648 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hpws7"] Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.607575 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvz4f\" (UniqueName: \"kubernetes.io/projected/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-kube-api-access-fvz4f\") pod \"nova-scheduler-0\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") " pod="openstack/nova-scheduler-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.607667 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-config-data\") pod \"nova-scheduler-0\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") " pod="openstack/nova-scheduler-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.607725 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") " pod="openstack/nova-scheduler-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.625279 4751 generic.go:334] "Generic (PLEG): container finished" podID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerID="1fc41ef015f899a24b1533353b6987552c41d8844cf586e1769d6cc1f39a0c6a" exitCode=137 Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.625584 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511","Type":"ContainerDied","Data":"1fc41ef015f899a24b1533353b6987552c41d8844cf586e1769d6cc1f39a0c6a"} Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.640507 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-config-data\") pod \"nova-scheduler-0\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") " pod="openstack/nova-scheduler-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.664396 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") " pod="openstack/nova-scheduler-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.665718 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvz4f\" (UniqueName: \"kubernetes.io/projected/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-kube-api-access-fvz4f\") pod \"nova-scheduler-0\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") " pod="openstack/nova-scheduler-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.673200 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.718503 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0010-account-create-update-t2pkp" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.728397 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-config\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.728452 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.728502 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qn7n\" (UniqueName: \"kubernetes.io/projected/f5ccd9fd-19b5-4def-9fec-de483cdc8282-kube-api-access-4qn7n\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.728525 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.728638 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.729394 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.784199 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.827636 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.832398 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.832496 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-config\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.832534 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.832596 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qn7n\" (UniqueName: \"kubernetes.io/projected/f5ccd9fd-19b5-4def-9fec-de483cdc8282-kube-api-access-4qn7n\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.832618 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.832687 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.833726 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.834344 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: 
\"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.834934 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-config\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.835487 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.836413 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.860712 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qn7n\" (UniqueName: \"kubernetes.io/projected/f5ccd9fd-19b5-4def-9fec-de483cdc8282-kube-api-access-4qn7n\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.905520 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.011479 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-x464h"] Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.289014 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.388795 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-sg-core-conf-yaml\") pod \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.389095 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-config-data\") pod \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.389301 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-combined-ca-bundle\") pod \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.389467 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-run-httpd\") pod \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.389530 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqkxn\" (UniqueName: \"kubernetes.io/projected/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-kube-api-access-dqkxn\") pod \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.389607 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-scripts\") pod \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.389627 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-log-httpd\") pod \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.390510 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" (UID: "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.390972 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" (UID: "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.396821 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-kube-api-access-dqkxn" (OuterVolumeSpecName: "kube-api-access-dqkxn") pod "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" (UID: "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511"). InnerVolumeSpecName "kube-api-access-dqkxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.401467 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-scripts" (OuterVolumeSpecName: "scripts") pod "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" (UID: "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.457835 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" (UID: "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.493917 4751 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.493950 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqkxn\" (UniqueName: \"kubernetes.io/projected/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-kube-api-access-dqkxn\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.493962 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.493970 4751 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.493979 4751 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.499454 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" (UID: "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.581359 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-php6q"] Jan 30 21:40:14 crc kubenswrapper[4751]: E0130 21:40:14.581848 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="sg-core" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.581862 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="sg-core" Jan 30 21:40:14 crc kubenswrapper[4751]: E0130 21:40:14.581877 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="ceilometer-central-agent" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.581884 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="ceilometer-central-agent" Jan 30 21:40:14 crc kubenswrapper[4751]: E0130 21:40:14.581900 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="ceilometer-notification-agent" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.581906 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="ceilometer-notification-agent" Jan 30 21:40:14 crc kubenswrapper[4751]: E0130 21:40:14.581934 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="proxy-httpd" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.581940 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="proxy-httpd" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.582153 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="ceilometer-notification-agent" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.582164 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="ceilometer-central-agent" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.582175 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="sg-core" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.582186 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="proxy-httpd" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.583134 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.589967 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.590228 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.597981 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.615614 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-php6q"] Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.649912 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-vm8dd"] Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.655911 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x464h" event={"ID":"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd","Type":"ContainerStarted","Data":"17690b46bb105b4071eb9244efb55112436407df788ac66de199405e58cab561"} Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.655948 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x464h" event={"ID":"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd","Type":"ContainerStarted","Data":"e386bc63e4c6fd0ed66a429bd207a6d77378c3bd08ff0d82d176f966ccb0e22d"} Jan 30 21:40:14 crc kubenswrapper[4751]: W0130 21:40:14.658786 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecc4c40f_f619_45e9_9e3d_baf3a3440ca2.slice/crio-3024c16ad9c3581d8359e24b5329d3dd5dbc599abb3804944a084a5026b4bea0 WatchSource:0}: Error finding container 3024c16ad9c3581d8359e24b5329d3dd5dbc599abb3804944a084a5026b4bea0: Status 404 returned error can't find the container with id 3024c16ad9c3581d8359e24b5329d3dd5dbc599abb3804944a084a5026b4bea0 Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.669577 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-config-data" (OuterVolumeSpecName: "config-data") pod "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" (UID: "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.676311 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511","Type":"ContainerDied","Data":"4884de2c448f76056d8317537ec7b098481217936fd9e2f04eb669de3c631faf"} Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.676373 4751 scope.go:117] "RemoveContainer" containerID="1fc41ef015f899a24b1533353b6987552c41d8844cf586e1769d6cc1f39a0c6a" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.676516 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.693174 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.701785 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-config-data\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.701890 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz5pm\" (UniqueName: \"kubernetes.io/projected/e675971e-ba0e-4630-bc1b-bdf47a433dd7-kube-api-access-jz5pm\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.701944 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-scripts\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.701986 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.702051 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.722432 4751 scope.go:117] "RemoveContainer" containerID="6d06091377b2a8fc82610799f2cf8764b0bd657f28f4bab21ad30e6eb045d8d2" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.725382 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.735860 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-x464h" podStartSLOduration=2.735835339 podStartE2EDuration="2.735835339s" podCreationTimestamp="2026-01-30 21:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:14.686036145 +0000 UTC m=+1553.431858794" watchObservedRunningTime="2026-01-30 21:40:14.735835339 +0000 UTC m=+1553.481657978" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.769171 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.797473 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.805865 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-config-data\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.805973 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz5pm\" (UniqueName: \"kubernetes.io/projected/e675971e-ba0e-4630-bc1b-bdf47a433dd7-kube-api-access-jz5pm\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.806023 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-scripts\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.806058 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.810903 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.811163 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-scripts\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.811994 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-config-data\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.816691 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.820213 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.825298 4751 scope.go:117] "RemoveContainer" containerID="9c01cf6df6cfdacc48a6527bc7f77429e8ab33e1ed7d506f200e6257569ad93c" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.825678 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz5pm\" (UniqueName: \"kubernetes.io/projected/e675971e-ba0e-4630-bc1b-bdf47a433dd7-kube-api-access-jz5pm\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.825832 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.826382 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.834064 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.859522 4751 scope.go:117] "RemoveContainer" containerID="cfdea75b80256723e6e5c7537ac03523b96b0f4ab2bf0621af6d62950a93c5b8" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.928987 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.011169 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-scripts\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.015110 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-log-httpd\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.015425 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-config-data\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.015558 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.015639 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.015754 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c68jd\" (UniqueName: 
\"kubernetes.io/projected/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-kube-api-access-c68jd\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.015788 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-run-httpd\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.060553 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0010-account-create-update-t2pkp"] Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.094425 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.120577 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.127089 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-config-data\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.127156 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.127195 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.127242 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c68jd\" (UniqueName: \"kubernetes.io/projected/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-kube-api-access-c68jd\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.127260 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-run-httpd\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.128433 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-scripts\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.128578 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-log-httpd\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 
21:40:15.129113 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-run-httpd\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.130118 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-log-httpd\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.137135 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.140503 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-scripts\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.141392 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-config-data\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.164125 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c68jd\" (UniqueName: \"kubernetes.io/projected/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-kube-api-access-c68jd\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.167278 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hpws7"] Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.180008 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.467143 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.635644 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-php6q"] Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.707814 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8","Type":"ContainerStarted","Data":"84126f69388906672210fb0c0f5b79f09ceedc6fc66204e43ac117768cbfb6e9"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.719135 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-php6q" event={"ID":"e675971e-ba0e-4630-bc1b-bdf47a433dd7","Type":"ContainerStarted","Data":"86f75d724ed17fcc60b6ecec83042af7088c44a17f4847501a5109d876e1517b"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.721906 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5","Type":"ContainerStarted","Data":"0603105be1749a5d28001ba428223b5f3edc9bed1bd5953cb98748d034ddf6d5"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.724896 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1","Type":"ContainerStarted","Data":"6634a7ec1926462565ee356b01be6be019e25607c220ad7ffda5036ec6de3853"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.727945 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0010-account-create-update-t2pkp" event={"ID":"8eb55a4f-933c-4871-b2d4-aed75e1449d7","Type":"ContainerStarted","Data":"c7bbfed7681d291cdba9800f3f96dcb721e8fd4853af323f0d29ccee985d7e37"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.727967 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0010-account-create-update-t2pkp" event={"ID":"8eb55a4f-933c-4871-b2d4-aed75e1449d7","Type":"ContainerStarted","Data":"d8d5341ff0931f82eac8cd8b45647c0f80fe4a045d5a4f8a89839f0597b82ef4"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.731064 4751 generic.go:334] "Generic (PLEG): container finished" podID="f5ccd9fd-19b5-4def-9fec-de483cdc8282" containerID="2cef368e1d9de3d2fb099a0412649b6c02ad1c0e0295100cf195bfffa3dcf34f" exitCode=0 Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.731122 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" event={"ID":"f5ccd9fd-19b5-4def-9fec-de483cdc8282","Type":"ContainerDied","Data":"2cef368e1d9de3d2fb099a0412649b6c02ad1c0e0295100cf195bfffa3dcf34f"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.731139 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" event={"ID":"f5ccd9fd-19b5-4def-9fec-de483cdc8282","Type":"ContainerStarted","Data":"90582d1ee044bf5a553f8b95b8254b85197e43f8fadf0df2ec897950782d7dc8"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.733014 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2","Type":"ContainerStarted","Data":"3024c16ad9c3581d8359e24b5329d3dd5dbc599abb3804944a084a5026b4bea0"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.734429 4751 generic.go:334] "Generic (PLEG): container finished" podID="f243fc38-73c3-44ef-98b1-8c3086761087" containerID="6d49b61e92e6eef2d8083686a2afeb4d6ae7d468f3b7fa9aa7d17b2c30415daf" exitCode=0 Jan 30 
21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.734534 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-vm8dd" event={"ID":"f243fc38-73c3-44ef-98b1-8c3086761087","Type":"ContainerDied","Data":"6d49b61e92e6eef2d8083686a2afeb4d6ae7d468f3b7fa9aa7d17b2c30415daf"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.734559 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-vm8dd" event={"ID":"f243fc38-73c3-44ef-98b1-8c3086761087","Type":"ContainerStarted","Data":"e9093f8aa53412c98d719622a4c9f60b070eeca780e5b7205a17f90369cae654"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.778162 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0010-account-create-update-t2pkp" podStartSLOduration=2.778143485 podStartE2EDuration="2.778143485s" podCreationTimestamp="2026-01-30 21:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:15.743757883 +0000 UTC m=+1554.489580532" watchObservedRunningTime="2026-01-30 21:40:15.778143485 +0000 UTC m=+1554.523966134" Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:16.007742 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" path="/var/lib/kubelet/pods/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511/volumes" Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:16.745646 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-php6q" event={"ID":"e675971e-ba0e-4630-bc1b-bdf47a433dd7","Type":"ContainerStarted","Data":"15b7f342e1abdf85738c2010d50dd3bc6a8ad893d7ecab47d753b5a1b032305d"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:16.748225 4751 generic.go:334] "Generic (PLEG): container finished" podID="8eb55a4f-933c-4871-b2d4-aed75e1449d7" containerID="c7bbfed7681d291cdba9800f3f96dcb721e8fd4853af323f0d29ccee985d7e37" exitCode=0 Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:16.748287 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0010-account-create-update-t2pkp" event={"ID":"8eb55a4f-933c-4871-b2d4-aed75e1449d7","Type":"ContainerDied","Data":"c7bbfed7681d291cdba9800f3f96dcb721e8fd4853af323f0d29ccee985d7e37"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:16.751830 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" event={"ID":"f5ccd9fd-19b5-4def-9fec-de483cdc8282","Type":"ContainerStarted","Data":"aa2893876b0b686f16a08289b6eaf353d9eb4d024f387b3760981d23221d7e36"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:16.751974 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:16.772762 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-php6q" podStartSLOduration=2.772739833 podStartE2EDuration="2.772739833s" podCreationTimestamp="2026-01-30 21:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:16.763599138 +0000 UTC m=+1555.509421787" watchObservedRunningTime="2026-01-30 21:40:16.772739833 +0000 UTC m=+1555.518562482" Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:16.793111 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" podStartSLOduration=3.793092078 podStartE2EDuration="3.793092078s" podCreationTimestamp="2026-01-30 21:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:16.785597547 +0000 UTC m=+1555.531420216" watchObservedRunningTime="2026-01-30 21:40:16.793092078 +0000 UTC m=+1555.538914727" Jan 30 21:40:17 crc kubenswrapper[4751]: I0130 21:40:17.096671 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:40:17 crc kubenswrapper[4751]: I0130 21:40:17.116074 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 21:40:17 crc kubenswrapper[4751]: I0130 21:40:17.285794 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:17 crc kubenswrapper[4751]: W0130 21:40:17.799869 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e6b4242_4cff_47ed_a1a0_d13cb8cb3f08.slice/crio-45f2a90ecd716a5036b578fd96b6b2949721363815ae3cd0cb7a739c6367674a WatchSource:0}: Error finding container 45f2a90ecd716a5036b578fd96b6b2949721363815ae3cd0cb7a739c6367674a: Status 404 returned error can't find the container with id 45f2a90ecd716a5036b578fd96b6b2949721363815ae3cd0cb7a739c6367674a Jan 30 21:40:18 crc kubenswrapper[4751]: I0130 21:40:18.786706 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08","Type":"ContainerStarted","Data":"45f2a90ecd716a5036b578fd96b6b2949721363815ae3cd0cb7a739c6367674a"} Jan 30 21:40:18 crc kubenswrapper[4751]: I0130 21:40:18.951645 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0010-account-create-update-t2pkp" Jan 30 21:40:18 crc kubenswrapper[4751]: I0130 21:40:18.988690 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eb55a4f-933c-4871-b2d4-aed75e1449d7-operator-scripts\") pod \"8eb55a4f-933c-4871-b2d4-aed75e1449d7\" (UID: \"8eb55a4f-933c-4871-b2d4-aed75e1449d7\") " Jan 30 21:40:18 crc kubenswrapper[4751]: I0130 21:40:18.988735 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvbkg\" (UniqueName: \"kubernetes.io/projected/8eb55a4f-933c-4871-b2d4-aed75e1449d7-kube-api-access-zvbkg\") pod \"8eb55a4f-933c-4871-b2d4-aed75e1449d7\" (UID: \"8eb55a4f-933c-4871-b2d4-aed75e1449d7\") " Jan 30 21:40:18 crc kubenswrapper[4751]: I0130 21:40:18.990094 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eb55a4f-933c-4871-b2d4-aed75e1449d7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8eb55a4f-933c-4871-b2d4-aed75e1449d7" (UID: "8eb55a4f-933c-4871-b2d4-aed75e1449d7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.002522 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eb55a4f-933c-4871-b2d4-aed75e1449d7-kube-api-access-zvbkg" (OuterVolumeSpecName: "kube-api-access-zvbkg") pod "8eb55a4f-933c-4871-b2d4-aed75e1449d7" (UID: "8eb55a4f-933c-4871-b2d4-aed75e1449d7"). InnerVolumeSpecName "kube-api-access-zvbkg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.098724 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eb55a4f-933c-4871-b2d4-aed75e1449d7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.098772 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvbkg\" (UniqueName: \"kubernetes.io/projected/8eb55a4f-933c-4871-b2d4-aed75e1449d7-kube-api-access-zvbkg\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.682946 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-vm8dd" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.712158 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrph8\" (UniqueName: \"kubernetes.io/projected/f243fc38-73c3-44ef-98b1-8c3086761087-kube-api-access-mrph8\") pod \"f243fc38-73c3-44ef-98b1-8c3086761087\" (UID: \"f243fc38-73c3-44ef-98b1-8c3086761087\") " Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.712440 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f243fc38-73c3-44ef-98b1-8c3086761087-operator-scripts\") pod \"f243fc38-73c3-44ef-98b1-8c3086761087\" (UID: \"f243fc38-73c3-44ef-98b1-8c3086761087\") " Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.713262 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f243fc38-73c3-44ef-98b1-8c3086761087-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f243fc38-73c3-44ef-98b1-8c3086761087" (UID: "f243fc38-73c3-44ef-98b1-8c3086761087"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.717389 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f243fc38-73c3-44ef-98b1-8c3086761087-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.739718 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f243fc38-73c3-44ef-98b1-8c3086761087-kube-api-access-mrph8" (OuterVolumeSpecName: "kube-api-access-mrph8") pod "f243fc38-73c3-44ef-98b1-8c3086761087" (UID: "f243fc38-73c3-44ef-98b1-8c3086761087"). InnerVolumeSpecName "kube-api-access-mrph8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.808120 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0010-account-create-update-t2pkp" event={"ID":"8eb55a4f-933c-4871-b2d4-aed75e1449d7","Type":"ContainerDied","Data":"d8d5341ff0931f82eac8cd8b45647c0f80fe4a045d5a4f8a89839f0597b82ef4"} Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.808157 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8d5341ff0931f82eac8cd8b45647c0f80fe4a045d5a4f8a89839f0597b82ef4" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.808221 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0010-account-create-update-t2pkp" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.819456 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrph8\" (UniqueName: \"kubernetes.io/projected/f243fc38-73c3-44ef-98b1-8c3086761087-kube-api-access-mrph8\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.833455 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-vm8dd" event={"ID":"f243fc38-73c3-44ef-98b1-8c3086761087","Type":"ContainerDied","Data":"e9093f8aa53412c98d719622a4c9f60b070eeca780e5b7205a17f90369cae654"} Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.833504 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9093f8aa53412c98d719622a4c9f60b070eeca780e5b7205a17f90369cae654" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.833567 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-vm8dd" Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.849678 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1","Type":"ContainerStarted","Data":"dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e"} Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.856516 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08","Type":"ContainerStarted","Data":"6c2f3f9f8e38206f31b75b809fd10381f8bc3e6137676fa3ac5b692f4ab1aec1"} Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.861817 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2","Type":"ContainerStarted","Data":"c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d"} Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.861882 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2","Type":"ContainerStarted","Data":"a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8"} Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.866906 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" containerName="nova-metadata-log" containerID="cri-o://4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2" gracePeriod=30 Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.866988 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5","Type":"ContainerStarted","Data":"104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7"} Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.867010 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5","Type":"ContainerStarted","Data":"4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2"} Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.867051 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" containerName="nova-metadata-metadata" 
containerID="cri-o://104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7" gracePeriod=30 Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.875212 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.4533252770000002 podStartE2EDuration="7.875195028s" podCreationTimestamp="2026-01-30 21:40:13 +0000 UTC" firstStartedPulling="2026-01-30 21:40:15.118537578 +0000 UTC m=+1553.864360227" lastFinishedPulling="2026-01-30 21:40:19.540407329 +0000 UTC m=+1558.286229978" observedRunningTime="2026-01-30 21:40:20.872846295 +0000 UTC m=+1559.618668944" watchObservedRunningTime="2026-01-30 21:40:20.875195028 +0000 UTC m=+1559.621017697" Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.896243 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8","Type":"ContainerStarted","Data":"37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13"} Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.897203 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="ce9fb6c9-b64c-4470-9be6-f8686b59b0f8" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13" gracePeriod=30 Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.949806 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.179790092 podStartE2EDuration="8.949765456s" podCreationTimestamp="2026-01-30 21:40:12 +0000 UTC" firstStartedPulling="2026-01-30 21:40:14.687464413 +0000 UTC m=+1553.433287062" lastFinishedPulling="2026-01-30 21:40:19.457439777 +0000 UTC m=+1558.203262426" observedRunningTime="2026-01-30 21:40:20.895106561 +0000 UTC m=+1559.640929210" watchObservedRunningTime="2026-01-30 21:40:20.949765456 +0000 UTC m=+1559.695588105" Jan 30 21:40:21 crc kubenswrapper[4751]: I0130 21:40:21.010080 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.549916974 podStartE2EDuration="8.01005381s" podCreationTimestamp="2026-01-30 21:40:13 +0000 UTC" firstStartedPulling="2026-01-30 21:40:15.079599545 +0000 UTC m=+1553.825422194" lastFinishedPulling="2026-01-30 21:40:19.539736371 +0000 UTC m=+1558.285559030" observedRunningTime="2026-01-30 21:40:20.918049856 +0000 UTC m=+1559.663872525" watchObservedRunningTime="2026-01-30 21:40:21.01005381 +0000 UTC m=+1559.755876459" Jan 30 21:40:21 crc kubenswrapper[4751]: I0130 21:40:21.018623 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.252949331 podStartE2EDuration="9.018602849s" podCreationTimestamp="2026-01-30 21:40:12 +0000 UTC" firstStartedPulling="2026-01-30 21:40:14.68699659 +0000 UTC m=+1553.432819239" lastFinishedPulling="2026-01-30 21:40:19.452650108 +0000 UTC m=+1558.198472757" observedRunningTime="2026-01-30 21:40:20.933991113 +0000 UTC m=+1559.679813762" watchObservedRunningTime="2026-01-30 21:40:21.018602849 +0000 UTC m=+1559.764425498" Jan 30 21:40:21 crc kubenswrapper[4751]: I0130 21:40:21.912063 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5","Type":"ContainerDied","Data":"4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2"} Jan 30 
21:40:21 crc kubenswrapper[4751]: I0130 21:40:21.912224 4751 generic.go:334] "Generic (PLEG): container finished" podID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" containerID="4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2" exitCode=143 Jan 30 21:40:21 crc kubenswrapper[4751]: I0130 21:40:21.921722 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08","Type":"ContainerStarted","Data":"423d5c0e48ebc4ebbb5d2c6df51425116510d1d23af248aa57b6d39b5308dda7"} Jan 30 21:40:21 crc kubenswrapper[4751]: I0130 21:40:21.921762 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08","Type":"ContainerStarted","Data":"d7b613c9fb5a4e06aefbc21a52f8dc19a4606e271e4fe01bbcecc755f41f5ef2"} Jan 30 21:40:22 crc kubenswrapper[4751]: I0130 21:40:22.936407 4751 generic.go:334] "Generic (PLEG): container finished" podID="0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd" containerID="17690b46bb105b4071eb9244efb55112436407df788ac66de199405e58cab561" exitCode=0 Jan 30 21:40:22 crc kubenswrapper[4751]: I0130 21:40:22.936512 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x464h" event={"ID":"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd","Type":"ContainerDied","Data":"17690b46bb105b4071eb9244efb55112436407df788ac66de199405e58cab561"} Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.399410 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-dmqw2"] Jan 30 21:40:23 crc kubenswrapper[4751]: E0130 21:40:23.400127 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f243fc38-73c3-44ef-98b1-8c3086761087" containerName="mariadb-database-create" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.400486 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f243fc38-73c3-44ef-98b1-8c3086761087" containerName="mariadb-database-create" Jan 30 21:40:23 crc kubenswrapper[4751]: E0130 21:40:23.400510 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb55a4f-933c-4871-b2d4-aed75e1449d7" containerName="mariadb-account-create-update" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.400516 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb55a4f-933c-4871-b2d4-aed75e1449d7" containerName="mariadb-account-create-update" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.400767 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eb55a4f-933c-4871-b2d4-aed75e1449d7" containerName="mariadb-account-create-update" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.400788 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f243fc38-73c3-44ef-98b1-8c3086761087" containerName="mariadb-database-create" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.401592 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.405439 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-k9tjh" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.405687 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.405802 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.405919 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.418532 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-dmqw2"] Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.518992 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.525916 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-scripts\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.525962 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-config-data\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.526050 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-combined-ca-bundle\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.526229 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttrf8\" (UniqueName: \"kubernetes.io/projected/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-kube-api-access-ttrf8\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.627897 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-combined-ca-bundle\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.628016 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttrf8\" (UniqueName: \"kubernetes.io/projected/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-kube-api-access-ttrf8\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.628075 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-scripts\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.628102 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-config-data\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.634144 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-combined-ca-bundle\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.634916 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-config-data\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.635422 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-scripts\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.651815 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttrf8\" (UniqueName: \"kubernetes.io/projected/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-kube-api-access-ttrf8\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.677528 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.677576 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.725853 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.785440 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.785526 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.807664 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.831008 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.831047 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.912251 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.936061 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data\") pod \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.936224 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9njj\" (UniqueName: \"kubernetes.io/projected/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-kube-api-access-m9njj\") pod \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.936313 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data-custom\") pod \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.936404 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-combined-ca-bundle\") pod \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.954244 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-kube-api-access-m9njj" (OuterVolumeSpecName: "kube-api-access-m9njj") pod "b8bf4d1e-d4c4-419c-b85b-5553a4996b75" (UID: "b8bf4d1e-d4c4-419c-b85b-5553a4996b75"). InnerVolumeSpecName "kube-api-access-m9njj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.961966 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b8bf4d1e-d4c4-419c-b85b-5553a4996b75" (UID: "b8bf4d1e-d4c4-419c-b85b-5553a4996b75"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.994504 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:40:23 crc kubenswrapper[4751]: E0130 21:40:23.995107 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.018496 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8bf4d1e-d4c4-419c-b85b-5553a4996b75" (UID: "b8bf4d1e-d4c4-419c-b85b-5553a4996b75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.028767 4751 generic.go:334] "Generic (PLEG): container finished" podID="b8bf4d1e-d4c4-419c-b85b-5553a4996b75" containerID="2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d" exitCode=137 Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.028998 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.035306 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" event={"ID":"b8bf4d1e-d4c4-419c-b85b-5553a4996b75","Type":"ContainerDied","Data":"2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d"} Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.035429 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" event={"ID":"b8bf4d1e-d4c4-419c-b85b-5553a4996b75","Type":"ContainerDied","Data":"bd07fda2d7a8e027afec3c510cd01ca77e8506c7870cbb605108fca39849bb7a"} Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.035481 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.035622 4751 scope.go:117] "RemoveContainer" containerID="2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.039354 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9njj\" (UniqueName: \"kubernetes.io/projected/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-kube-api-access-m9njj\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.039612 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.039622 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.059110 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-688b9f5b49-g645r"] Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.059410 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" podUID="5c918a5e-396e-4f0a-a68e-babcb03f2f4f" containerName="dnsmasq-dns" containerID="cri-o://eb0f3e62504ef376cb231daedb59d05d53a0fc2c2b6b6606cc3a08b14b2931e8" gracePeriod=10 Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.120102 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data" (OuterVolumeSpecName: "config-data") pod "b8bf4d1e-d4c4-419c-b85b-5553a4996b75" (UID: "b8bf4d1e-d4c4-419c-b85b-5553a4996b75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.141397 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.142519 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.207605 4751 scope.go:117] "RemoveContainer" containerID="2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d" Jan 30 21:40:24 crc kubenswrapper[4751]: E0130 21:40:24.208518 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d\": container with ID starting with 2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d not found: ID does not exist" containerID="2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.208568 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d"} err="failed to get container status \"2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d\": rpc error: code = NotFound desc = could not find container \"2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d\": container with ID starting with 2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d not found: ID does not exist" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.370673 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6b5fd5d955-5ksqz"] Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.390760 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6b5fd5d955-5ksqz"] Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.463863 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-dmqw2"] Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.718769 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.247:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.731037 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x464h" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.759778 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.247:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.860115 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-config-data\") pod \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.860343 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-scripts\") pod \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.860388 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr8ls\" (UniqueName: \"kubernetes.io/projected/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-kube-api-access-gr8ls\") pod \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.860416 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-combined-ca-bundle\") pod \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.870757 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-kube-api-access-gr8ls" (OuterVolumeSpecName: "kube-api-access-gr8ls") pod "0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd" (UID: "0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd"). InnerVolumeSpecName "kube-api-access-gr8ls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.872657 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-scripts" (OuterVolumeSpecName: "scripts") pod "0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd" (UID: "0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.900529 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd" (UID: "0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.911390 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-config-data" (OuterVolumeSpecName: "config-data") pod "0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd" (UID: "0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.962757 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.962786 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.962795 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr8ls\" (UniqueName: \"kubernetes.io/projected/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-kube-api-access-gr8ls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.962805 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.046529 4751 generic.go:334] "Generic (PLEG): container finished" podID="5c918a5e-396e-4f0a-a68e-babcb03f2f4f" containerID="eb0f3e62504ef376cb231daedb59d05d53a0fc2c2b6b6606cc3a08b14b2931e8" exitCode=0 Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.046603 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" event={"ID":"5c918a5e-396e-4f0a-a68e-babcb03f2f4f","Type":"ContainerDied","Data":"eb0f3e62504ef376cb231daedb59d05d53a0fc2c2b6b6606cc3a08b14b2931e8"} Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.052201 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dmqw2" event={"ID":"da95a3dd-69cf-4a27-af6c-1ac5b262c00a","Type":"ContainerStarted","Data":"e00a45ef9c5e0a4a179301e799c6bf54f6ff0d30e7efc8df176d7b1e8a8eca4c"} Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.085699 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08","Type":"ContainerStarted","Data":"21af121eb48e81f26f40336062b858e9071037b9c373a72bd1d118b16d6241fb"} Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.085920 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.125742 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.334642158 podStartE2EDuration="11.1257211s" podCreationTimestamp="2026-01-30 21:40:14 +0000 UTC" firstStartedPulling="2026-01-30 21:40:18.740416373 +0000 UTC m=+1557.486239022" lastFinishedPulling="2026-01-30 21:40:24.531495305 +0000 UTC m=+1563.277317964" observedRunningTime="2026-01-30 21:40:25.111258242 +0000 UTC m=+1563.857080891" watchObservedRunningTime="2026-01-30 21:40:25.1257211 +0000 UTC m=+1563.871543739" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.167356 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x464h" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.167636 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x464h" event={"ID":"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd","Type":"ContainerDied","Data":"e386bc63e4c6fd0ed66a429bd207a6d77378c3bd08ff0d82d176f966ccb0e22d"} Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.167756 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e386bc63e4c6fd0ed66a429bd207a6d77378c3bd08ff0d82d176f966ccb0e22d" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.201184 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.201416 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerName="nova-api-log" containerID="cri-o://a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8" gracePeriod=30 Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.201539 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerName="nova-api-api" containerID="cri-o://c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d" gracePeriod=30 Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.349779 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.464458 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.512871 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6c27\" (UniqueName: \"kubernetes.io/projected/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-kube-api-access-j6c27\") pod \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.512932 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-config\") pod \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.513042 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-svc\") pod \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.513119 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-nb\") pod \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.513147 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-sb\") pod \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " Jan 30 21:40:25 crc 
kubenswrapper[4751]: I0130 21:40:25.513282 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-swift-storage-0\") pod \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.523228 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-kube-api-access-j6c27" (OuterVolumeSpecName: "kube-api-access-j6c27") pod "5c918a5e-396e-4f0a-a68e-babcb03f2f4f" (UID: "5c918a5e-396e-4f0a-a68e-babcb03f2f4f"). InnerVolumeSpecName "kube-api-access-j6c27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.587202 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5c918a5e-396e-4f0a-a68e-babcb03f2f4f" (UID: "5c918a5e-396e-4f0a-a68e-babcb03f2f4f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.601131 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5c918a5e-396e-4f0a-a68e-babcb03f2f4f" (UID: "5c918a5e-396e-4f0a-a68e-babcb03f2f4f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.617414 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.617462 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.617478 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6c27\" (UniqueName: \"kubernetes.io/projected/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-kube-api-access-j6c27\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.631765 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5c918a5e-396e-4f0a-a68e-babcb03f2f4f" (UID: "5c918a5e-396e-4f0a-a68e-babcb03f2f4f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.644552 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-config" (OuterVolumeSpecName: "config") pod "5c918a5e-396e-4f0a-a68e-babcb03f2f4f" (UID: "5c918a5e-396e-4f0a-a68e-babcb03f2f4f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.645717 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5c918a5e-396e-4f0a-a68e-babcb03f2f4f" (UID: "5c918a5e-396e-4f0a-a68e-babcb03f2f4f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.721118 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.722575 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.722656 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.994755 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8bf4d1e-d4c4-419c-b85b-5553a4996b75" path="/var/lib/kubelet/pods/b8bf4d1e-d4c4-419c-b85b-5553a4996b75/volumes" Jan 30 21:40:26 crc kubenswrapper[4751]: I0130 21:40:26.201013 4751 generic.go:334] "Generic (PLEG): container finished" podID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerID="a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8" exitCode=143 Jan 30 21:40:26 crc kubenswrapper[4751]: I0130 21:40:26.201081 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2","Type":"ContainerDied","Data":"a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8"} Jan 30 21:40:26 crc kubenswrapper[4751]: I0130 21:40:26.216387 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:40:26 crc kubenswrapper[4751]: I0130 21:40:26.217153 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" event={"ID":"5c918a5e-396e-4f0a-a68e-babcb03f2f4f","Type":"ContainerDied","Data":"170e124c674cb6797f80484f6f460a674ca21330ea1b870e78baeac2120834e0"} Jan 30 21:40:26 crc kubenswrapper[4751]: I0130 21:40:26.217188 4751 scope.go:117] "RemoveContainer" containerID="eb0f3e62504ef376cb231daedb59d05d53a0fc2c2b6b6606cc3a08b14b2931e8" Jan 30 21:40:26 crc kubenswrapper[4751]: I0130 21:40:26.217359 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ac78e815-d61d-4cb5-ae78-e2ed6c7478e1" containerName="nova-scheduler-scheduler" containerID="cri-o://dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e" gracePeriod=30 Jan 30 21:40:26 crc kubenswrapper[4751]: I0130 21:40:26.263380 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-g645r"] Jan 30 21:40:26 crc kubenswrapper[4751]: I0130 21:40:26.264482 4751 scope.go:117] "RemoveContainer" containerID="19625e5a680f498754e1957e0d693d69d11c0c30e1b3f7eadc11af86a948548e" Jan 30 21:40:26 crc kubenswrapper[4751]: I0130 21:40:26.273046 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-g645r"] Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.234277 4751 generic.go:334] "Generic (PLEG): container finished" podID="e675971e-ba0e-4630-bc1b-bdf47a433dd7" containerID="15b7f342e1abdf85738c2010d50dd3bc6a8ad893d7ecab47d753b5a1b032305d" exitCode=0 Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.234666 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-php6q" event={"ID":"e675971e-ba0e-4630-bc1b-bdf47a433dd7","Type":"ContainerDied","Data":"15b7f342e1abdf85738c2010d50dd3bc6a8ad893d7ecab47d753b5a1b032305d"} Jan 30 21:40:27 crc kubenswrapper[4751]: E0130 21:40:27.522064 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac78e815_d61d_4cb5_ae78_e2ed6c7478e1.slice/crio-dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e.scope\": RecentStats: unable to find data in memory cache]" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.871520 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-28zk5"] Jan 30 21:40:27 crc kubenswrapper[4751]: E0130 21:40:27.872446 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c918a5e-396e-4f0a-a68e-babcb03f2f4f" containerName="init" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.872473 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c918a5e-396e-4f0a-a68e-babcb03f2f4f" containerName="init" Jan 30 21:40:27 crc kubenswrapper[4751]: E0130 21:40:27.872509 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c918a5e-396e-4f0a-a68e-babcb03f2f4f" containerName="dnsmasq-dns" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.872518 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c918a5e-396e-4f0a-a68e-babcb03f2f4f" containerName="dnsmasq-dns" Jan 30 21:40:27 crc kubenswrapper[4751]: E0130 21:40:27.872557 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd" containerName="nova-manage" Jan 
30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.872565 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd" containerName="nova-manage" Jan 30 21:40:27 crc kubenswrapper[4751]: E0130 21:40:27.872591 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bf4d1e-d4c4-419c-b85b-5553a4996b75" containerName="heat-cfnapi" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.872599 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bf4d1e-d4c4-419c-b85b-5553a4996b75" containerName="heat-cfnapi" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.872848 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd" containerName="nova-manage" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.872911 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8bf4d1e-d4c4-419c-b85b-5553a4996b75" containerName="heat-cfnapi" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.872940 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c918a5e-396e-4f0a-a68e-babcb03f2f4f" containerName="dnsmasq-dns" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.877193 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.883227 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-28zk5"] Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.889262 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-catalog-content\") pod \"redhat-marketplace-28zk5\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.889431 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-utilities\") pod \"redhat-marketplace-28zk5\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.889487 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpw44\" (UniqueName: \"kubernetes.io/projected/488bc1bc-a729-42ea-8a7c-20ace387607e-kube-api-access-fpw44\") pod \"redhat-marketplace-28zk5\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.987835 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c918a5e-396e-4f0a-a68e-babcb03f2f4f" path="/var/lib/kubelet/pods/5c918a5e-396e-4f0a-a68e-babcb03f2f4f/volumes" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.990912 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-catalog-content\") pod \"redhat-marketplace-28zk5\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.991054 4751 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-utilities\") pod \"redhat-marketplace-28zk5\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.991110 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpw44\" (UniqueName: \"kubernetes.io/projected/488bc1bc-a729-42ea-8a7c-20ace387607e-kube-api-access-fpw44\") pod \"redhat-marketplace-28zk5\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.991523 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-catalog-content\") pod \"redhat-marketplace-28zk5\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.991845 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-utilities\") pod \"redhat-marketplace-28zk5\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:28 crc kubenswrapper[4751]: I0130 21:40:28.028833 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpw44\" (UniqueName: \"kubernetes.io/projected/488bc1bc-a729-42ea-8a7c-20ace387607e-kube-api-access-fpw44\") pod \"redhat-marketplace-28zk5\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:28 crc kubenswrapper[4751]: I0130 21:40:28.212410 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:28 crc kubenswrapper[4751]: I0130 21:40:28.249247 4751 generic.go:334] "Generic (PLEG): container finished" podID="ac78e815-d61d-4cb5-ae78-e2ed6c7478e1" containerID="dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e" exitCode=0 Jan 30 21:40:28 crc kubenswrapper[4751]: I0130 21:40:28.249461 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1","Type":"ContainerDied","Data":"dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e"} Jan 30 21:40:28 crc kubenswrapper[4751]: E0130 21:40:28.830589 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e is running failed: container process not found" containerID="dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 21:40:28 crc kubenswrapper[4751]: E0130 21:40:28.831187 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e is running failed: container process not found" containerID="dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 21:40:28 crc kubenswrapper[4751]: E0130 21:40:28.831611 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e is running failed: container process not found" containerID="dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 21:40:28 crc kubenswrapper[4751]: E0130 21:40:28.831684 4751 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ac78e815-d61d-4cb5-ae78-e2ed6c7478e1" containerName="nova-scheduler-scheduler" Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.280491 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-php6q" event={"ID":"e675971e-ba0e-4630-bc1b-bdf47a433dd7","Type":"ContainerDied","Data":"86f75d724ed17fcc60b6ecec83042af7088c44a17f4847501a5109d876e1517b"} Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.280805 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86f75d724ed17fcc60b6ecec83042af7088c44a17f4847501a5109d876e1517b" Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.283140 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1","Type":"ContainerDied","Data":"6634a7ec1926462565ee356b01be6be019e25607c220ad7ffda5036ec6de3853"} Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.283163 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6634a7ec1926462565ee356b01be6be019e25607c220ad7ffda5036ec6de3853" Jan 30 21:40:30 crc 
kubenswrapper[4751]: I0130 21:40:30.511453 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-php6q"
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.521036 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.653964 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-config-data\") pod \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") "
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.654021 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-combined-ca-bundle\") pod \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") "
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.654157 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvz4f\" (UniqueName: \"kubernetes.io/projected/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-kube-api-access-fvz4f\") pod \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") "
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.654213 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-config-data\") pod \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") "
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.654237 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-scripts\") pod \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") "
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.654278 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-combined-ca-bundle\") pod \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") "
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.654316 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz5pm\" (UniqueName: \"kubernetes.io/projected/e675971e-ba0e-4630-bc1b-bdf47a433dd7-kube-api-access-jz5pm\") pod \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") "
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.659571 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-scripts" (OuterVolumeSpecName: "scripts") pod "e675971e-ba0e-4630-bc1b-bdf47a433dd7" (UID: "e675971e-ba0e-4630-bc1b-bdf47a433dd7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.660059 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-kube-api-access-fvz4f" (OuterVolumeSpecName: "kube-api-access-fvz4f") pod "ac78e815-d61d-4cb5-ae78-e2ed6c7478e1" (UID: "ac78e815-d61d-4cb5-ae78-e2ed6c7478e1"). InnerVolumeSpecName "kube-api-access-fvz4f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.660194 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e675971e-ba0e-4630-bc1b-bdf47a433dd7-kube-api-access-jz5pm" (OuterVolumeSpecName: "kube-api-access-jz5pm") pod "e675971e-ba0e-4630-bc1b-bdf47a433dd7" (UID: "e675971e-ba0e-4630-bc1b-bdf47a433dd7"). InnerVolumeSpecName "kube-api-access-jz5pm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.713725 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-config-data" (OuterVolumeSpecName: "config-data") pod "ac78e815-d61d-4cb5-ae78-e2ed6c7478e1" (UID: "ac78e815-d61d-4cb5-ae78-e2ed6c7478e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.715027 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac78e815-d61d-4cb5-ae78-e2ed6c7478e1" (UID: "ac78e815-d61d-4cb5-ae78-e2ed6c7478e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.721995 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-config-data" (OuterVolumeSpecName: "config-data") pod "e675971e-ba0e-4630-bc1b-bdf47a433dd7" (UID: "e675971e-ba0e-4630-bc1b-bdf47a433dd7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.742083 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e675971e-ba0e-4630-bc1b-bdf47a433dd7" (UID: "e675971e-ba0e-4630-bc1b-bdf47a433dd7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.763899 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvz4f\" (UniqueName: \"kubernetes.io/projected/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-kube-api-access-fvz4f\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.763934 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.763945 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.763953 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.763962 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz5pm\" (UniqueName: \"kubernetes.io/projected/e675971e-ba0e-4630-bc1b-bdf47a433dd7-kube-api-access-jz5pm\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.763970 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.763979 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.797064 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-28zk5"]
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.227908 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.297076 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dmqw2" event={"ID":"da95a3dd-69cf-4a27-af6c-1ac5b262c00a","Type":"ContainerStarted","Data":"0309515ef606ff55b4a18e80ad5013912740e27a1539e1146826375f54a2b553"}
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.298711 4751 generic.go:334] "Generic (PLEG): container finished" podID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerID="c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d" exitCode=0
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.298763 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.298831 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2","Type":"ContainerDied","Data":"c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d"}
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.298915 4751 scope.go:117] "RemoveContainer" containerID="c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.299082 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2","Type":"ContainerDied","Data":"3024c16ad9c3581d8359e24b5329d3dd5dbc599abb3804944a084a5026b4bea0"}
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.300318 4751 generic.go:334] "Generic (PLEG): container finished" podID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerID="139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9" exitCode=0
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.300429 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-php6q"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.301249 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28zk5" event={"ID":"488bc1bc-a729-42ea-8a7c-20ace387607e","Type":"ContainerDied","Data":"139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9"}
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.301273 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28zk5" event={"ID":"488bc1bc-a729-42ea-8a7c-20ace387607e","Type":"ContainerStarted","Data":"7804348a0fdd5e612f2972159acdb604e779f2dc22bcf28432d660b17f23f501"}
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.301304 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.331972 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-dmqw2" podStartSLOduration=2.545221765 podStartE2EDuration="8.331949511s" podCreationTimestamp="2026-01-30 21:40:23 +0000 UTC" firstStartedPulling="2026-01-30 21:40:24.544267677 +0000 UTC m=+1563.290090326" lastFinishedPulling="2026-01-30 21:40:30.330995423 +0000 UTC m=+1569.076818072" observedRunningTime="2026-01-30 21:40:31.310498566 +0000 UTC m=+1570.056321215" watchObservedRunningTime="2026-01-30 21:40:31.331949511 +0000 UTC m=+1570.077772160"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.369202 4751 scope.go:117] "RemoveContainer" containerID="a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.378115 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsttj\" (UniqueName: \"kubernetes.io/projected/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-kube-api-access-dsttj\") pod \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") "
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.378476 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-logs\") pod \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") "
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.378544 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-config-data\") pod \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") "
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.378585 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-combined-ca-bundle\") pod \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") "
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.379654 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-logs" (OuterVolumeSpecName: "logs") pod "ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" (UID: "ecc4c40f-f619-45e9-9e3d-baf3a3440ca2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.380129 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-logs\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.395944 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.398276 4751 scope.go:117] "RemoveContainer" containerID="c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d"
Jan 30 21:40:31 crc kubenswrapper[4751]: E0130 21:40:31.403208 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d\": container with ID starting with c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d not found: ID does not exist" containerID="c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.403254 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d"} err="failed to get container status \"c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d\": rpc error: code = NotFound desc = could not find container \"c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d\": container with ID starting with c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d not found: ID does not exist"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.403282 4751 scope.go:117] "RemoveContainer" containerID="a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8"
Jan 30 21:40:31 crc kubenswrapper[4751]: E0130 21:40:31.403533 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8\": container with ID starting with a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8 not found: ID does not exist" containerID="a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.403576 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8"} err="failed to get container status \"a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8\": rpc error: code = NotFound desc = could not find container \"a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8\": container with ID starting with a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8 not found: ID does not exist"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.409118 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.412316 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-kube-api-access-dsttj" (OuterVolumeSpecName: "kube-api-access-dsttj") pod "ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" (UID: "ecc4c40f-f619-45e9-9e3d-baf3a3440ca2"). InnerVolumeSpecName "kube-api-access-dsttj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.439898 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 21:40:31 crc kubenswrapper[4751]: E0130 21:40:31.440458 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerName="nova-api-api"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.440481 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerName="nova-api-api"
Jan 30 21:40:31 crc kubenswrapper[4751]: E0130 21:40:31.440494 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerName="nova-api-log"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.440502 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerName="nova-api-log"
Jan 30 21:40:31 crc kubenswrapper[4751]: E0130 21:40:31.440536 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675971e-ba0e-4630-bc1b-bdf47a433dd7" containerName="nova-cell1-conductor-db-sync"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.440545 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675971e-ba0e-4630-bc1b-bdf47a433dd7" containerName="nova-cell1-conductor-db-sync"
Jan 30 21:40:31 crc kubenswrapper[4751]: E0130 21:40:31.440590 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac78e815-d61d-4cb5-ae78-e2ed6c7478e1" containerName="nova-scheduler-scheduler"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.440599 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac78e815-d61d-4cb5-ae78-e2ed6c7478e1" containerName="nova-scheduler-scheduler"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.440846 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac78e815-d61d-4cb5-ae78-e2ed6c7478e1" containerName="nova-scheduler-scheduler"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.440871 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerName="nova-api-log"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.440907 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675971e-ba0e-4630-bc1b-bdf47a433dd7" containerName="nova-cell1-conductor-db-sync"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.440919 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerName="nova-api-api"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.452922 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.453049 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.458475 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-config-data" (OuterVolumeSpecName: "config-data") pod "ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" (UID: "ecc4c40f-f619-45e9-9e3d-baf3a3440ca2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.461766 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.483526 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsttj\" (UniqueName: \"kubernetes.io/projected/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-kube-api-access-dsttj\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.483552 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.502530 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" (UID: "ecc4c40f-f619-45e9-9e3d-baf3a3440ca2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.585738 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " pod="openstack/nova-scheduler-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.585792 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-config-data\") pod \"nova-scheduler-0\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " pod="openstack/nova-scheduler-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.585913 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5sm5\" (UniqueName: \"kubernetes.io/projected/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-kube-api-access-k5sm5\") pod \"nova-scheduler-0\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " pod="openstack/nova-scheduler-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.586054 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.684850 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.698830 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.699850 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5sm5\" (UniqueName: \"kubernetes.io/projected/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-kube-api-access-k5sm5\") pod \"nova-scheduler-0\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " pod="openstack/nova-scheduler-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.700197 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " pod="openstack/nova-scheduler-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.700251 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-config-data\") pod \"nova-scheduler-0\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " pod="openstack/nova-scheduler-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.716080 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.725238 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " pod="openstack/nova-scheduler-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.752143 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5sm5\" (UniqueName: \"kubernetes.io/projected/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-kube-api-access-k5sm5\") pod \"nova-scheduler-0\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " pod="openstack/nova-scheduler-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.755273 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-config-data\") pod \"nova-scheduler-0\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " pod="openstack/nova-scheduler-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.802740 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1153d5-e025-439d-9799-8bf38014a585-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6c1153d5-e025-439d-9799-8bf38014a585\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.802798 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1153d5-e025-439d-9799-8bf38014a585-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6c1153d5-e025-439d-9799-8bf38014a585\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.802865 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xrzd\" (UniqueName: \"kubernetes.io/projected/6c1153d5-e025-439d-9799-8bf38014a585-kube-api-access-6xrzd\") pod \"nova-cell1-conductor-0\" (UID: \"6c1153d5-e025-439d-9799-8bf38014a585\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.805185 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.835462 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.848778 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.861614 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.866222 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.871041 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.876319 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.909721 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1153d5-e025-439d-9799-8bf38014a585-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6c1153d5-e025-439d-9799-8bf38014a585\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.909836 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1153d5-e025-439d-9799-8bf38014a585-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6c1153d5-e025-439d-9799-8bf38014a585\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.910455 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xrzd\" (UniqueName: \"kubernetes.io/projected/6c1153d5-e025-439d-9799-8bf38014a585-kube-api-access-6xrzd\") pod \"nova-cell1-conductor-0\" (UID: \"6c1153d5-e025-439d-9799-8bf38014a585\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.918286 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1153d5-e025-439d-9799-8bf38014a585-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6c1153d5-e025-439d-9799-8bf38014a585\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.921044 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1153d5-e025-439d-9799-8bf38014a585-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6c1153d5-e025-439d-9799-8bf38014a585\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.926291 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.927805 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xrzd\" (UniqueName: \"kubernetes.io/projected/6c1153d5-e025-439d-9799-8bf38014a585-kube-api-access-6xrzd\") pod \"nova-cell1-conductor-0\" (UID: \"6c1153d5-e025-439d-9799-8bf38014a585\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.996308 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac78e815-d61d-4cb5-ae78-e2ed6c7478e1" path="/var/lib/kubelet/pods/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1/volumes"
Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.996923 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" path="/var/lib/kubelet/pods/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2/volumes"
Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.014721 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e039239f-9678-4ac5-bbd9-31120a7e569a-logs\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0"
Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.014949 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0"
Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.015078 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-config-data\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0"
Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.016307 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndzzh\" (UniqueName: \"kubernetes.io/projected/e039239f-9678-4ac5-bbd9-31120a7e569a-kube-api-access-ndzzh\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0"
Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.069799 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.121658 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e039239f-9678-4ac5-bbd9-31120a7e569a-logs\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0"
Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.121809 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0"
Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.121862 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-config-data\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0"
Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.122057 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndzzh\" (UniqueName: \"kubernetes.io/projected/e039239f-9678-4ac5-bbd9-31120a7e569a-kube-api-access-ndzzh\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0"
Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.123165 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e039239f-9678-4ac5-bbd9-31120a7e569a-logs\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0"
Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.128414 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0"
Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.129418 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-config-data\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0"
Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.140070 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndzzh\" (UniqueName: \"kubernetes.io/projected/e039239f-9678-4ac5-bbd9-31120a7e569a-kube-api-access-ndzzh\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0"
Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.201206 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.392664 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.638231 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.764420 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 21:40:32 crc kubenswrapper[4751]: W0130 21:40:32.766498 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode039239f_9678_4ac5_bbd9_31120a7e569a.slice/crio-6f2d3e4731c7bd546c8fba7d19f7cf9cd29af48c0d46d6409c89f580aefa0bd4 WatchSource:0}: Error finding container 6f2d3e4731c7bd546c8fba7d19f7cf9cd29af48c0d46d6409c89f580aefa0bd4: Status 404 returned error can't find the container with id 6f2d3e4731c7bd546c8fba7d19f7cf9cd29af48c0d46d6409c89f580aefa0bd4
Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.335931 4751 generic.go:334] "Generic (PLEG): container finished" podID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerID="86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b" exitCode=0
Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.336004 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28zk5" event={"ID":"488bc1bc-a729-42ea-8a7c-20ace387607e","Type":"ContainerDied","Data":"86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b"}
Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.338509 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3","Type":"ContainerStarted","Data":"71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334"}
Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.338567 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3","Type":"ContainerStarted","Data":"48f5ec0c53f7e04ad3c659a4b9e04d6883529b029a3420700108431ca3b92a48"}
Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.340902 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e039239f-9678-4ac5-bbd9-31120a7e569a","Type":"ContainerStarted","Data":"512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8"}
Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.340926 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e039239f-9678-4ac5-bbd9-31120a7e569a","Type":"ContainerStarted","Data":"8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe"}
Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.340936 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e039239f-9678-4ac5-bbd9-31120a7e569a","Type":"ContainerStarted","Data":"6f2d3e4731c7bd546c8fba7d19f7cf9cd29af48c0d46d6409c89f580aefa0bd4"}
Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.342907 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6c1153d5-e025-439d-9799-8bf38014a585","Type":"ContainerStarted","Data":"31909a59565029413bfc6272ccab934b2e947a948ea6c7ab91a11e78c7ca024a"}
Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.342939 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6c1153d5-e025-439d-9799-8bf38014a585","Type":"ContainerStarted","Data":"49ad8244bc99944ded6c99263f0addd92985672a9d9e6f6ae9d39addce8e392e"}
Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.343588 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.400533 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.4005156530000002 podStartE2EDuration="2.400515653s" podCreationTimestamp="2026-01-30 21:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:33.392055546 +0000 UTC m=+1572.137878195" watchObservedRunningTime="2026-01-30 21:40:33.400515653 +0000 UTC m=+1572.146338302"
Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.423786 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.423764356 podStartE2EDuration="2.423764356s" podCreationTimestamp="2026-01-30 21:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:33.409835203 +0000 UTC m=+1572.155657852" watchObservedRunningTime="2026-01-30 21:40:33.423764356 +0000 UTC m=+1572.169587005"
Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.439414 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.439393714 podStartE2EDuration="2.439393714s" podCreationTimestamp="2026-01-30 21:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:33.429108108 +0000 UTC m=+1572.174930757" watchObservedRunningTime="2026-01-30 21:40:33.439393714 +0000 UTC m=+1572.185216363"
Jan 30 21:40:34 crc kubenswrapper[4751]: I0130 21:40:34.357295 4751 generic.go:334] "Generic (PLEG): container finished" podID="da95a3dd-69cf-4a27-af6c-1ac5b262c00a" containerID="0309515ef606ff55b4a18e80ad5013912740e27a1539e1146826375f54a2b553" exitCode=0
Jan 30 21:40:34 crc kubenswrapper[4751]: I0130 21:40:34.357431 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dmqw2" event={"ID":"da95a3dd-69cf-4a27-af6c-1ac5b262c00a","Type":"ContainerDied","Data":"0309515ef606ff55b4a18e80ad5013912740e27a1539e1146826375f54a2b553"}
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.369032 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28zk5" event={"ID":"488bc1bc-a729-42ea-8a7c-20ace387607e","Type":"ContainerStarted","Data":"5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8"}
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.393105 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-28zk5" podStartSLOduration=5.121456157 podStartE2EDuration="8.39308866s" podCreationTimestamp="2026-01-30 21:40:27 +0000 UTC" firstStartedPulling="2026-01-30 21:40:31.311475723 +0000 UTC m=+1570.057298372" lastFinishedPulling="2026-01-30 21:40:34.583108236 +0000 UTC m=+1573.328930875" observedRunningTime="2026-01-30 21:40:35.38637927 +0000 UTC m=+1574.132201909" watchObservedRunningTime="2026-01-30 21:40:35.39308866 +0000 UTC m=+1574.138911299"
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.905297 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-dmqw2"
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.925578 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-scripts\") pod \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") "
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.925959 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttrf8\" (UniqueName: \"kubernetes.io/projected/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-kube-api-access-ttrf8\") pod \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") "
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.925993 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-config-data\") pod \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") "
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.926773 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-combined-ca-bundle\") pod \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") "
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.933188 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-scripts" (OuterVolumeSpecName: "scripts") pod "da95a3dd-69cf-4a27-af6c-1ac5b262c00a" (UID: "da95a3dd-69cf-4a27-af6c-1ac5b262c00a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.933248 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-kube-api-access-ttrf8" (OuterVolumeSpecName: "kube-api-access-ttrf8") pod "da95a3dd-69cf-4a27-af6c-1ac5b262c00a" (UID: "da95a3dd-69cf-4a27-af6c-1ac5b262c00a"). InnerVolumeSpecName "kube-api-access-ttrf8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.976907 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88"
Jan 30 21:40:35 crc kubenswrapper[4751]: E0130 21:40:35.977527 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.981494 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da95a3dd-69cf-4a27-af6c-1ac5b262c00a" (UID: "da95a3dd-69cf-4a27-af6c-1ac5b262c00a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.996516 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-config-data" (OuterVolumeSpecName: "config-data") pod "da95a3dd-69cf-4a27-af6c-1ac5b262c00a" (UID: "da95a3dd-69cf-4a27-af6c-1ac5b262c00a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:36 crc kubenswrapper[4751]: I0130 21:40:36.030131 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:36 crc kubenswrapper[4751]: I0130 21:40:36.030173 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:36 crc kubenswrapper[4751]: I0130 21:40:36.030184 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttrf8\" (UniqueName: \"kubernetes.io/projected/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-kube-api-access-ttrf8\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:36 crc kubenswrapper[4751]: I0130 21:40:36.030194 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:36 crc kubenswrapper[4751]: I0130 21:40:36.380606 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dmqw2" event={"ID":"da95a3dd-69cf-4a27-af6c-1ac5b262c00a","Type":"ContainerDied","Data":"e00a45ef9c5e0a4a179301e799c6bf54f6ff0d30e7efc8df176d7b1e8a8eca4c"}
Jan 30 21:40:36 crc kubenswrapper[4751]: I0130 21:40:36.380998 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e00a45ef9c5e0a4a179301e799c6bf54f6ff0d30e7efc8df176d7b1e8a8eca4c"
Jan 30 21:40:36 crc kubenswrapper[4751]: I0130 21:40:36.380722 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-dmqw2"
Jan 30 21:40:36 crc kubenswrapper[4751]: I0130 21:40:36.926752 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 30 21:40:37 crc kubenswrapper[4751]: I0130 21:40:37.103788 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.213115 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-28zk5"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.213183 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-28zk5"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.509732 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Jan 30 21:40:38 crc kubenswrapper[4751]: E0130 21:40:38.510298 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da95a3dd-69cf-4a27-af6c-1ac5b262c00a" containerName="aodh-db-sync"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.510317 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="da95a3dd-69cf-4a27-af6c-1ac5b262c00a" containerName="aodh-db-sync"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.510606 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="da95a3dd-69cf-4a27-af6c-1ac5b262c00a" containerName="aodh-db-sync"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.547428 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.547527 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.555368 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-k9tjh"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.555549 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.555661 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.631423 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgzzr\" (UniqueName: \"kubernetes.io/projected/80a202f4-615a-4f93-86ef-46b6a994dd48-kube-api-access-zgzzr\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.631478 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-combined-ca-bundle\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.631505 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-config-data\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.631773 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-scripts\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.733810 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgzzr\" (UniqueName: \"kubernetes.io/projected/80a202f4-615a-4f93-86ef-46b6a994dd48-kube-api-access-zgzzr\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.734068 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-combined-ca-bundle\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.734241 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-config-data\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.734520 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-scripts\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.741773 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-scripts\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.767922 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-config-data\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.771055 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-combined-ca-bundle\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.779021 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgzzr\" (UniqueName: \"kubernetes.io/projected/80a202f4-615a-4f93-86ef-46b6a994dd48-kube-api-access-zgzzr\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0"
Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.882455 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Jan 30 21:40:39 crc kubenswrapper[4751]: I0130 21:40:39.300442 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-28zk5" podUID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerName="registry-server" probeResult="failure" output=<
Jan 30 21:40:39 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s
Jan 30 21:40:39 crc kubenswrapper[4751]: >
Jan 30 21:40:39 crc kubenswrapper[4751]: I0130 21:40:39.410953 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Jan 30 21:40:39 crc kubenswrapper[4751]: W0130 21:40:39.416475 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80a202f4_615a_4f93_86ef_46b6a994dd48.slice/crio-5a4c0dc75f2802cc1bc85d8688e41faab929773b0378a29a4bf4c2cf7cf3db55 WatchSource:0}: Error finding container 5a4c0dc75f2802cc1bc85d8688e41faab929773b0378a29a4bf4c2cf7cf3db55: Status 404 returned error can't find the container with id 5a4c0dc75f2802cc1bc85d8688e41faab929773b0378a29a4bf4c2cf7cf3db55
Jan 30 21:40:40 crc kubenswrapper[4751]: I0130 21:40:40.437314 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"80a202f4-615a-4f93-86ef-46b6a994dd48","Type":"ContainerStarted","Data":"ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd"}
Jan 30 21:40:40 crc kubenswrapper[4751]: I0130 21:40:40.437784 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"80a202f4-615a-4f93-86ef-46b6a994dd48","Type":"ContainerStarted","Data":"5a4c0dc75f2802cc1bc85d8688e41faab929773b0378a29a4bf4c2cf7cf3db55"}
Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.044312 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.044846 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="ceilometer-central-agent" containerID="cri-o://6c2f3f9f8e38206f31b75b809fd10381f8bc3e6137676fa3ac5b692f4ab1aec1" gracePeriod=30
Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.044914 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="ceilometer-notification-agent" containerID="cri-o://d7b613c9fb5a4e06aefbc21a52f8dc19a4606e271e4fe01bbcecc755f41f5ef2" gracePeriod=30
Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.045076 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="sg-core" containerID="cri-o://423d5c0e48ebc4ebbb5d2c6df51425116510d1d23af248aa57b6d39b5308dda7" gracePeriod=30
Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.045152 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="proxy-httpd" containerID="cri-o://21af121eb48e81f26f40336062b858e9071037b9c373a72bd1d118b16d6241fb" gracePeriod=30
Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.057844 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.253:3000/\": EOF"
Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.451092 4751 generic.go:334] "Generic (PLEG): container finished" podID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerID="423d5c0e48ebc4ebbb5d2c6df51425116510d1d23af248aa57b6d39b5308dda7" exitCode=2
Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.451149 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08","Type":"ContainerDied","Data":"423d5c0e48ebc4ebbb5d2c6df51425116510d1d23af248aa57b6d39b5308dda7"}
Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.590450 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.926934 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.971116 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 30 21:40:42 crc kubenswrapper[4751]: I0130 21:40:42.201640 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 21:40:42 crc kubenswrapper[4751]: I0130 21:40:42.201943 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 21:40:42 crc kubenswrapper[4751]: I0130 21:40:42.464388 4751 generic.go:334] "Generic (PLEG): container finished" podID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerID="21af121eb48e81f26f40336062b858e9071037b9c373a72bd1d118b16d6241fb" exitCode=0
Jan 30 21:40:42 crc kubenswrapper[4751]: I0130 21:40:42.464430 4751 generic.go:334] "Generic (PLEG): container finished" podID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerID="6c2f3f9f8e38206f31b75b809fd10381f8bc3e6137676fa3ac5b692f4ab1aec1" exitCode=0
Jan 30 21:40:42 crc kubenswrapper[4751]: I0130 21:40:42.464452 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08","Type":"ContainerDied","Data":"21af121eb48e81f26f40336062b858e9071037b9c373a72bd1d118b16d6241fb"}
Jan 30 21:40:42 crc kubenswrapper[4751]: I0130 21:40:42.464495 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08","Type":"ContainerDied","Data":"6c2f3f9f8e38206f31b75b809fd10381f8bc3e6137676fa3ac5b692f4ab1aec1"}
Jan 30 21:40:42 crc kubenswrapper[4751]: I0130 21:40:42.549148 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 30 21:40:43 crc kubenswrapper[4751]: I0130 21:40:43.284108 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.2:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 21:40:43 crc kubenswrapper[4751]: I0130 21:40:43.284102 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.2:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 21:40:43 crc kubenswrapper[4751]: I0130 21:40:43.478473 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"80a202f4-615a-4f93-86ef-46b6a994dd48","Type":"ContainerStarted","Data":"e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19"}
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.494874 4751 generic.go:334] "Generic (PLEG): container finished" podID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerID="d7b613c9fb5a4e06aefbc21a52f8dc19a4606e271e4fe01bbcecc755f41f5ef2" exitCode=0
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.495214 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08","Type":"ContainerDied","Data":"d7b613c9fb5a4e06aefbc21a52f8dc19a4606e271e4fe01bbcecc755f41f5ef2"}
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.574199 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.685271 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-log-httpd\") pod \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") "
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.685620 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-combined-ca-bundle\") pod \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") "
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.685641 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c68jd\" (UniqueName: \"kubernetes.io/projected/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-kube-api-access-c68jd\") pod \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") "
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.685668 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-sg-core-conf-yaml\") pod \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") "
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.685681 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" (UID: "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.685809 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-scripts\") pod \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") "
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.685875 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-run-httpd\") pod \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") "
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.685915 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-config-data\") pod \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") "
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.686561 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" (UID: "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.686640 4751 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.690685 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-kube-api-access-c68jd" (OuterVolumeSpecName: "kube-api-access-c68jd") pod "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" (UID: "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08"). InnerVolumeSpecName "kube-api-access-c68jd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.693892 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-scripts" (OuterVolumeSpecName: "scripts") pod "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" (UID: "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.744276 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" (UID: "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.798089 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c68jd\" (UniqueName: \"kubernetes.io/projected/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-kube-api-access-c68jd\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.798134 4751 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.798145 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.798155 4751 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.821692 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" (UID: "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.839474 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-config-data" (OuterVolumeSpecName: "config-data") pod "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" (UID: "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.902094 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.902146 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.516667 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"80a202f4-615a-4f93-86ef-46b6a994dd48","Type":"ContainerStarted","Data":"f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb"}
Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.521153 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08","Type":"ContainerDied","Data":"45f2a90ecd716a5036b578fd96b6b2949721363815ae3cd0cb7a739c6367674a"}
Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.521193 4751 scope.go:117] "RemoveContainer" containerID="21af121eb48e81f26f40336062b858e9071037b9c373a72bd1d118b16d6241fb"
Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.521389 4751 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.589378 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.603148 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.644205 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:45 crc kubenswrapper[4751]: E0130 21:40:45.645014 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="proxy-httpd" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.645084 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="proxy-httpd" Jan 30 21:40:45 crc kubenswrapper[4751]: E0130 21:40:45.645155 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="sg-core" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.645211 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="sg-core" Jan 30 21:40:45 crc kubenswrapper[4751]: E0130 21:40:45.645270 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="ceilometer-notification-agent" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.645336 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="ceilometer-notification-agent" Jan 30 21:40:45 crc kubenswrapper[4751]: E0130 21:40:45.645402 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="ceilometer-central-agent" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.645455 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="ceilometer-central-agent" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.646358 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="ceilometer-central-agent" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.646448 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="proxy-httpd" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.646531 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="sg-core" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.646591 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="ceilometer-notification-agent" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.648637 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.652751 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.655247 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.673497 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.719201 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.719285 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-config-data\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.719382 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.719411 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg2hf\" (UniqueName: \"kubernetes.io/projected/62fe5344-a9e0-40c6-9c81-06061248f1f6-kube-api-access-fg2hf\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.719440 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-scripts\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.719470 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-log-httpd\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.719522 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-run-httpd\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.821434 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-config-data\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.821548 
4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.821586 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg2hf\" (UniqueName: \"kubernetes.io/projected/62fe5344-a9e0-40c6-9c81-06061248f1f6-kube-api-access-fg2hf\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.821617 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-scripts\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.821650 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-log-httpd\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.821672 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-run-httpd\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.821755 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.822493 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-log-httpd\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.822924 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-run-httpd\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.827532 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-config-data\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.828191 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-scripts\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.828745 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.828791 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.840798 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg2hf\" (UniqueName: \"kubernetes.io/projected/62fe5344-a9e0-40c6-9c81-06061248f1f6-kube-api-access-fg2hf\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.858876 4751 scope.go:117] "RemoveContainer" containerID="423d5c0e48ebc4ebbb5d2c6df51425116510d1d23af248aa57b6d39b5308dda7" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.970493 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.992102 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" path="/var/lib/kubelet/pods/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08/volumes" Jan 30 21:40:46 crc kubenswrapper[4751]: I0130 21:40:46.071648 4751 scope.go:117] "RemoveContainer" containerID="d7b613c9fb5a4e06aefbc21a52f8dc19a4606e271e4fe01bbcecc755f41f5ef2" Jan 30 21:40:46 crc kubenswrapper[4751]: I0130 21:40:46.172902 4751 scope.go:117] "RemoveContainer" containerID="6c2f3f9f8e38206f31b75b809fd10381f8bc3e6137676fa3ac5b692f4ab1aec1" Jan 30 21:40:46 crc kubenswrapper[4751]: I0130 21:40:46.539530 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"80a202f4-615a-4f93-86ef-46b6a994dd48","Type":"ContainerStarted","Data":"9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598"} Jan 30 21:40:46 crc kubenswrapper[4751]: I0130 21:40:46.539866 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-api" containerID="cri-o://ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd" gracePeriod=30 Jan 30 21:40:46 crc kubenswrapper[4751]: I0130 21:40:46.539910 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-listener" containerID="cri-o://9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598" gracePeriod=30 Jan 30 21:40:46 crc kubenswrapper[4751]: I0130 21:40:46.539951 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-notifier" containerID="cri-o://f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb" gracePeriod=30 Jan 30 21:40:46 crc kubenswrapper[4751]: I0130 21:40:46.540000 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-evaluator" containerID="cri-o://e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19" gracePeriod=30 Jan 30 21:40:46 crc 
kubenswrapper[4751]: I0130 21:40:46.576683 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.10204679 podStartE2EDuration="8.576639178s" podCreationTimestamp="2026-01-30 21:40:38 +0000 UTC" firstStartedPulling="2026-01-30 21:40:39.419309084 +0000 UTC m=+1578.165131733" lastFinishedPulling="2026-01-30 21:40:45.893901462 +0000 UTC m=+1584.639724121" observedRunningTime="2026-01-30 21:40:46.564256186 +0000 UTC m=+1585.310078845" watchObservedRunningTime="2026-01-30 21:40:46.576639178 +0000 UTC m=+1585.322461827" Jan 30 21:40:46 crc kubenswrapper[4751]: I0130 21:40:46.737233 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:47 crc kubenswrapper[4751]: I0130 21:40:47.553980 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62fe5344-a9e0-40c6-9c81-06061248f1f6","Type":"ContainerStarted","Data":"2ef5340b3986daee0e9ee3b7fa6ae7d3a92b434f52410ab75ef5c07485682360"} Jan 30 21:40:47 crc kubenswrapper[4751]: I0130 21:40:47.554262 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62fe5344-a9e0-40c6-9c81-06061248f1f6","Type":"ContainerStarted","Data":"952c0e68caba01e8a19179d8cae039bc4ad5143d5f3d94ca42acc30936e372b0"} Jan 30 21:40:47 crc kubenswrapper[4751]: I0130 21:40:47.556549 4751 generic.go:334] "Generic (PLEG): container finished" podID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerID="f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb" exitCode=0 Jan 30 21:40:47 crc kubenswrapper[4751]: I0130 21:40:47.556581 4751 generic.go:334] "Generic (PLEG): container finished" podID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerID="e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19" exitCode=0 Jan 30 21:40:47 crc kubenswrapper[4751]: I0130 21:40:47.556588 4751 generic.go:334] "Generic (PLEG): container finished" podID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerID="ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd" exitCode=0 Jan 30 21:40:47 crc kubenswrapper[4751]: I0130 21:40:47.556615 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"80a202f4-615a-4f93-86ef-46b6a994dd48","Type":"ContainerDied","Data":"f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb"} Jan 30 21:40:47 crc kubenswrapper[4751]: I0130 21:40:47.556642 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"80a202f4-615a-4f93-86ef-46b6a994dd48","Type":"ContainerDied","Data":"e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19"} Jan 30 21:40:47 crc kubenswrapper[4751]: I0130 21:40:47.556653 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"80a202f4-615a-4f93-86ef-46b6a994dd48","Type":"ContainerDied","Data":"ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd"} Jan 30 21:40:48 crc kubenswrapper[4751]: I0130 21:40:48.293561 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:48 crc kubenswrapper[4751]: I0130 21:40:48.357996 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:48 crc kubenswrapper[4751]: I0130 21:40:48.542285 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-28zk5"] Jan 30 21:40:48 crc 
kubenswrapper[4751]: I0130 21:40:48.571386 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62fe5344-a9e0-40c6-9c81-06061248f1f6","Type":"ContainerStarted","Data":"fc7b159909c752eb3954d73a3b4fb088f0cc4914ab1f1a66d75adb6cdd4ca970"} Jan 30 21:40:49 crc kubenswrapper[4751]: I0130 21:40:49.583005 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62fe5344-a9e0-40c6-9c81-06061248f1f6","Type":"ContainerStarted","Data":"fcda05dd91f891b6b10d97096bd01c5909bc42dc90db535c273e9630d9ad1d16"} Jan 30 21:40:49 crc kubenswrapper[4751]: I0130 21:40:49.583193 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-28zk5" podUID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerName="registry-server" containerID="cri-o://5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8" gracePeriod=2 Jan 30 21:40:49 crc kubenswrapper[4751]: I0130 21:40:49.975747 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:40:49 crc kubenswrapper[4751]: E0130 21:40:49.976400 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.269715 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.355536 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-utilities\") pod \"488bc1bc-a729-42ea-8a7c-20ace387607e\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.356104 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpw44\" (UniqueName: \"kubernetes.io/projected/488bc1bc-a729-42ea-8a7c-20ace387607e-kube-api-access-fpw44\") pod \"488bc1bc-a729-42ea-8a7c-20ace387607e\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.356181 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-catalog-content\") pod \"488bc1bc-a729-42ea-8a7c-20ace387607e\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.356280 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-utilities" (OuterVolumeSpecName: "utilities") pod "488bc1bc-a729-42ea-8a7c-20ace387607e" (UID: "488bc1bc-a729-42ea-8a7c-20ace387607e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.356811 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.365654 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/488bc1bc-a729-42ea-8a7c-20ace387607e-kube-api-access-fpw44" (OuterVolumeSpecName: "kube-api-access-fpw44") pod "488bc1bc-a729-42ea-8a7c-20ace387607e" (UID: "488bc1bc-a729-42ea-8a7c-20ace387607e"). InnerVolumeSpecName "kube-api-access-fpw44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.374119 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "488bc1bc-a729-42ea-8a7c-20ace387607e" (UID: "488bc1bc-a729-42ea-8a7c-20ace387607e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.458679 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpw44\" (UniqueName: \"kubernetes.io/projected/488bc1bc-a729-42ea-8a7c-20ace387607e-kube-api-access-fpw44\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.458709 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.598830 4751 generic.go:334] "Generic (PLEG): container finished" podID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerID="5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8" exitCode=0 Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.599194 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28zk5" event={"ID":"488bc1bc-a729-42ea-8a7c-20ace387607e","Type":"ContainerDied","Data":"5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8"} Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.599234 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28zk5" event={"ID":"488bc1bc-a729-42ea-8a7c-20ace387607e","Type":"ContainerDied","Data":"7804348a0fdd5e612f2972159acdb604e779f2dc22bcf28432d660b17f23f501"} Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.599267 4751 scope.go:117] "RemoveContainer" containerID="5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.599490 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.647587 4751 scope.go:117] "RemoveContainer" containerID="86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.656644 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-28zk5"] Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.676181 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-28zk5"] Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.799882 4751 scope.go:117] "RemoveContainer" containerID="139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.939695 4751 scope.go:117] "RemoveContainer" containerID="5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8" Jan 30 21:40:50 crc kubenswrapper[4751]: E0130 21:40:50.944593 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8\": container with ID starting with 5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8 not found: ID does not exist" containerID="5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.944644 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8"} err="failed to get container status \"5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8\": rpc error: code = NotFound desc = could not find container \"5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8\": container with ID starting with 5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8 not found: ID does not exist" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.944671 4751 scope.go:117] "RemoveContainer" containerID="86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b" Jan 30 21:40:50 crc kubenswrapper[4751]: E0130 21:40:50.954495 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b\": container with ID starting with 86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b not found: ID does not exist" containerID="86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.954544 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b"} err="failed to get container status \"86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b\": rpc error: code = NotFound desc = could not find container \"86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b\": container with ID starting with 86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b not found: ID does not exist" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.954571 4751 scope.go:117] "RemoveContainer" containerID="139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9" Jan 30 21:40:50 crc kubenswrapper[4751]: E0130 21:40:50.957986 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9\": container with ID starting with 139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9 not found: ID does not exist" containerID="139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.958026 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9"} err="failed to get container status \"139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9\": rpc error: code = NotFound desc = could not find container \"139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9\": container with ID starting with 139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9 not found: ID does not exist" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.549136 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.575082 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.600046 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b97x5\" (UniqueName: \"kubernetes.io/projected/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-kube-api-access-b97x5\") pod \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.600437 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-config-data\") pod \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.600582 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-combined-ca-bundle\") pod \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.608078 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-kube-api-access-b97x5" (OuterVolumeSpecName: "kube-api-access-b97x5") pod "ce9fb6c9-b64c-4470-9be6-f8686b59b0f8" (UID: "ce9fb6c9-b64c-4470-9be6-f8686b59b0f8"). InnerVolumeSpecName "kube-api-access-b97x5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.619271 4751 generic.go:334] "Generic (PLEG): container finished" podID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" containerID="104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7" exitCode=137 Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.619381 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5","Type":"ContainerDied","Data":"104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7"} Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.619428 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5","Type":"ContainerDied","Data":"0603105be1749a5d28001ba428223b5f3edc9bed1bd5953cb98748d034ddf6d5"} Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.619450 4751 scope.go:117] "RemoveContainer" containerID="104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.619551 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.635500 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-config-data" (OuterVolumeSpecName: "config-data") pod "ce9fb6c9-b64c-4470-9be6-f8686b59b0f8" (UID: "ce9fb6c9-b64c-4470-9be6-f8686b59b0f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.659253 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce9fb6c9-b64c-4470-9be6-f8686b59b0f8" (UID: "ce9fb6c9-b64c-4470-9be6-f8686b59b0f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.661204 4751 generic.go:334] "Generic (PLEG): container finished" podID="ce9fb6c9-b64c-4470-9be6-f8686b59b0f8" containerID="37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13" exitCode=137 Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.661275 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8","Type":"ContainerDied","Data":"37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13"} Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.661300 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8","Type":"ContainerDied","Data":"84126f69388906672210fb0c0f5b79f09ceedc6fc66204e43ac117768cbfb6e9"} Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.661374 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.668680 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62fe5344-a9e0-40c6-9c81-06061248f1f6","Type":"ContainerStarted","Data":"2694e2dbfd66f237208db55c844cda317cabffa6ea9248c49cdacc371ec12ccc"} Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.669510 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.673020 4751 scope.go:117] "RemoveContainer" containerID="4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.706031 4751 scope.go:117] "RemoveContainer" containerID="104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7" Jan 30 21:40:51 crc kubenswrapper[4751]: E0130 21:40:51.707061 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7\": container with ID starting with 104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7 not found: ID does not exist" containerID="104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.707094 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7"} err="failed to get container status \"104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7\": rpc error: code = NotFound desc = could not find container \"104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7\": container with ID starting with 104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7 not found: ID does not exist" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.707116 4751 scope.go:117] "RemoveContainer" containerID="4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2" Jan 30 21:40:51 crc kubenswrapper[4751]: E0130 21:40:51.707591 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2\": container with ID starting with 4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2 not found: ID does not exist" containerID="4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.707643 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2"} err="failed to get container status \"4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2\": rpc error: code = NotFound desc = could not find container \"4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2\": container with ID starting with 4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2 not found: ID does not exist" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.707671 4751 scope.go:117] "RemoveContainer" containerID="37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.708527 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-combined-ca-bundle\") pod \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.708611 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-logs\") pod \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.708944 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf8kb\" (UniqueName: \"kubernetes.io/projected/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-kube-api-access-cf8kb\") pod \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.709004 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-config-data\") pod \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.709604 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.709621 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b97x5\" (UniqueName: \"kubernetes.io/projected/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-kube-api-access-b97x5\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.709632 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.709643 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-logs" (OuterVolumeSpecName: "logs") pod "ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" (UID: "ef384c2e-1483-4ad5-aaa8-96c4a347bcb5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.708500 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.82600271 podStartE2EDuration="6.708482662s" podCreationTimestamp="2026-01-30 21:40:45 +0000 UTC" firstStartedPulling="2026-01-30 21:40:46.733709775 +0000 UTC m=+1585.479532414" lastFinishedPulling="2026-01-30 21:40:50.616189717 +0000 UTC m=+1589.362012366" observedRunningTime="2026-01-30 21:40:51.690751196 +0000 UTC m=+1590.436573855" watchObservedRunningTime="2026-01-30 21:40:51.708482662 +0000 UTC m=+1590.454305321" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.750423 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.758625 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-kube-api-access-cf8kb" (OuterVolumeSpecName: "kube-api-access-cf8kb") pod "ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" (UID: "ef384c2e-1483-4ad5-aaa8-96c4a347bcb5"). InnerVolumeSpecName "kube-api-access-cf8kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.771558 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" (UID: "ef384c2e-1483-4ad5-aaa8-96c4a347bcb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.782870 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.783254 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-config-data" (OuterVolumeSpecName: "config-data") pod "ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" (UID: "ef384c2e-1483-4ad5-aaa8-96c4a347bcb5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.811717 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf8kb\" (UniqueName: \"kubernetes.io/projected/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-kube-api-access-cf8kb\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.811747 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.811756 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.811776 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.813846 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 21:40:51 crc kubenswrapper[4751]: E0130 21:40:51.814285 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerName="extract-content" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.814296 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerName="extract-content" Jan 30 21:40:51 crc kubenswrapper[4751]: E0130 21:40:51.814343 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerName="extract-utilities" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.814349 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerName="extract-utilities" Jan 30 21:40:51 crc kubenswrapper[4751]: E0130 21:40:51.814362 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9fb6c9-b64c-4470-9be6-f8686b59b0f8" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.814376 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9fb6c9-b64c-4470-9be6-f8686b59b0f8" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 21:40:51 crc kubenswrapper[4751]: E0130 21:40:51.814388 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" containerName="nova-metadata-log" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.814394 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" containerName="nova-metadata-log" Jan 30 21:40:51 crc kubenswrapper[4751]: E0130 21:40:51.814424 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerName="registry-server" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.814431 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerName="registry-server" Jan 30 21:40:51 crc kubenswrapper[4751]: E0130 21:40:51.814443 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" containerName="nova-metadata-metadata" Jan 30 21:40:51 crc kubenswrapper[4751]: 
I0130 21:40:51.814450 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" containerName="nova-metadata-metadata" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.814685 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" containerName="nova-metadata-log" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.814706 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9fb6c9-b64c-4470-9be6-f8686b59b0f8" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.814724 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerName="registry-server" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.814737 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" containerName="nova-metadata-metadata" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.815742 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.823524 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.823837 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.823951 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.829434 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.859703 4751 scope.go:117] "RemoveContainer" containerID="37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13" Jan 30 21:40:51 crc kubenswrapper[4751]: E0130 21:40:51.860580 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13\": container with ID starting with 37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13 not found: ID does not exist" containerID="37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.860617 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13"} err="failed to get container status \"37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13\": rpc error: code = NotFound desc = could not find container \"37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13\": container with ID starting with 37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13 not found: ID does not exist" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.913927 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:51 crc kubenswrapper[4751]: 
I0130 21:40:51.913986 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.914080 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8glx\" (UniqueName: \"kubernetes.io/projected/150d4911-b366-4c81-b4fa-b5c5e8cadc78-kube-api-access-x8glx\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.914108 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.914195 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.017520 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.017581 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.018568 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8glx\" (UniqueName: \"kubernetes.io/projected/150d4911-b366-4c81-b4fa-b5c5e8cadc78-kube-api-access-x8glx\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.018616 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.018763 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: 
I0130 21:40:52.026289 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.027008 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="488bc1bc-a729-42ea-8a7c-20ace387607e" path="/var/lib/kubelet/pods/488bc1bc-a729-42ea-8a7c-20ace387607e/volumes" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.027175 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.028788 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce9fb6c9-b64c-4470-9be6-f8686b59b0f8" path="/var/lib/kubelet/pods/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8/volumes" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.029411 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.029450 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.029475 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.033836 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.035656 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8glx\" (UniqueName: \"kubernetes.io/projected/150d4911-b366-4c81-b4fa-b5c5e8cadc78-kube-api-access-x8glx\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.047260 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.049178 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.050678 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.051135 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.068839 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.120954 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.120999 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-config-data\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.121090 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5htcz\" (UniqueName: \"kubernetes.io/projected/df87edd8-7be6-4739-b927-7fd4415a1945-kube-api-access-5htcz\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.121111 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df87edd8-7be6-4739-b927-7fd4415a1945-logs\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.121246 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.160407 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.208343 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.209246 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.212066 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.212401 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.224078 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5htcz\" (UniqueName: \"kubernetes.io/projected/df87edd8-7be6-4739-b927-7fd4415a1945-kube-api-access-5htcz\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.224138 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df87edd8-7be6-4739-b927-7fd4415a1945-logs\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.224185 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.224354 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.224376 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-config-data\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.224772 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df87edd8-7be6-4739-b927-7fd4415a1945-logs\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.230810 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.234573 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.243456 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5htcz\" (UniqueName: \"kubernetes.io/projected/df87edd8-7be6-4739-b927-7fd4415a1945-kube-api-access-5htcz\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.247438 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-config-data\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.444595 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.683154 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.698718 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.707602 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.921381 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-z9wt9"] Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.923572 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.951398 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-z9wt9"] Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.018260 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.054450 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-config\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.054573 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.054616 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.054663 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.054697 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.054726 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvszz\" (UniqueName: \"kubernetes.io/projected/294126cb-98f1-4a1b-84eb-256f24d312ec-kube-api-access-zvszz\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.158128 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.158188 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvszz\" (UniqueName: \"kubernetes.io/projected/294126cb-98f1-4a1b-84eb-256f24d312ec-kube-api-access-zvszz\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.160208 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-config\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.160390 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.160461 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.160529 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.160725 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.162993 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.163303 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-config\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.163866 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.163926 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.183705 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvszz\" (UniqueName: \"kubernetes.io/projected/294126cb-98f1-4a1b-84eb-256f24d312ec-kube-api-access-zvszz\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.325617 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.740512 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"150d4911-b366-4c81-b4fa-b5c5e8cadc78","Type":"ContainerStarted","Data":"14ce6ffec99e0c80e4971861f03d514c9f7e3b18520d2d18102b41cfcbca8741"} Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.741130 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"150d4911-b366-4c81-b4fa-b5c5e8cadc78","Type":"ContainerStarted","Data":"e0362de9a7e0469b252bb7c768e54742f32662d53a68e945693c5681ad433159"} Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.756251 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"df87edd8-7be6-4739-b927-7fd4415a1945","Type":"ContainerStarted","Data":"d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69"} Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.756287 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"df87edd8-7be6-4739-b927-7fd4415a1945","Type":"ContainerStarted","Data":"204fbf404d1c8ba52093e260239c68e9a0a2f21f814a66b0d5a430c499419aa4"} Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.774704 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.77468543 podStartE2EDuration="2.77468543s" podCreationTimestamp="2026-01-30 21:40:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:53.764318092 +0000 UTC m=+1592.510140751" watchObservedRunningTime="2026-01-30 21:40:53.77468543 +0000 UTC m=+1592.520508079" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.926475 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-z9wt9"] Jan 30 21:40:54 crc kubenswrapper[4751]: I0130 21:40:54.000151 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" path="/var/lib/kubelet/pods/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5/volumes" Jan 30 21:40:54 crc kubenswrapper[4751]: I0130 21:40:54.767509 4751 generic.go:334] "Generic (PLEG): container finished" podID="294126cb-98f1-4a1b-84eb-256f24d312ec" containerID="6917c598c5eba82a1b463890dd92dd5f9d24bd22527e450c4ff5bef9192d6678" exitCode=0 Jan 30 21:40:54 crc kubenswrapper[4751]: I0130 21:40:54.767558 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" event={"ID":"294126cb-98f1-4a1b-84eb-256f24d312ec","Type":"ContainerDied","Data":"6917c598c5eba82a1b463890dd92dd5f9d24bd22527e450c4ff5bef9192d6678"} Jan 30 21:40:54 crc kubenswrapper[4751]: I0130 21:40:54.768034 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" event={"ID":"294126cb-98f1-4a1b-84eb-256f24d312ec","Type":"ContainerStarted","Data":"dee8c983aae4ef6924c6d9d77cb8b52f55a1c21e202b60331cafd82b2208a0d0"} Jan 30 21:40:54 crc kubenswrapper[4751]: I0130 21:40:54.771087 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"df87edd8-7be6-4739-b927-7fd4415a1945","Type":"ContainerStarted","Data":"5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303"} Jan 30 21:40:54 crc kubenswrapper[4751]: I0130 21:40:54.810913 4751 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.810898403 podStartE2EDuration="3.810898403s" podCreationTimestamp="2026-01-30 21:40:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:54.804244245 +0000 UTC m=+1593.550066894" watchObservedRunningTime="2026-01-30 21:40:54.810898403 +0000 UTC m=+1593.556721052" Jan 30 21:40:55 crc kubenswrapper[4751]: I0130 21:40:55.662993 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:40:55 crc kubenswrapper[4751]: I0130 21:40:55.782191 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerName="nova-api-log" containerID="cri-o://8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe" gracePeriod=30 Jan 30 21:40:55 crc kubenswrapper[4751]: I0130 21:40:55.783676 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" event={"ID":"294126cb-98f1-4a1b-84eb-256f24d312ec","Type":"ContainerStarted","Data":"594cfabac973f7db3981d05b5c834beff59d6ed05b3a70c8219822cdac4213e7"} Jan 30 21:40:55 crc kubenswrapper[4751]: I0130 21:40:55.783709 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:55 crc kubenswrapper[4751]: I0130 21:40:55.784501 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerName="nova-api-api" containerID="cri-o://512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8" gracePeriod=30 Jan 30 21:40:55 crc kubenswrapper[4751]: I0130 21:40:55.825265 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" podStartSLOduration=3.82524884 podStartE2EDuration="3.82524884s" podCreationTimestamp="2026-01-30 21:40:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:55.821872689 +0000 UTC m=+1594.567695348" watchObservedRunningTime="2026-01-30 21:40:55.82524884 +0000 UTC m=+1594.571071489" Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.074225 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.074660 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="ceilometer-central-agent" containerID="cri-o://2ef5340b3986daee0e9ee3b7fa6ae7d3a92b434f52410ab75ef5c07485682360" gracePeriod=30 Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.075581 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="proxy-httpd" containerID="cri-o://2694e2dbfd66f237208db55c844cda317cabffa6ea9248c49cdacc371ec12ccc" gracePeriod=30 Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.075649 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="sg-core" containerID="cri-o://fcda05dd91f891b6b10d97096bd01c5909bc42dc90db535c273e9630d9ad1d16" gracePeriod=30 Jan 30 21:40:56 crc 
kubenswrapper[4751]: I0130 21:40:56.075697 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="ceilometer-notification-agent" containerID="cri-o://fc7b159909c752eb3954d73a3b4fb088f0cc4914ab1f1a66d75adb6cdd4ca970" gracePeriod=30 Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.799855 4751 generic.go:334] "Generic (PLEG): container finished" podID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerID="2694e2dbfd66f237208db55c844cda317cabffa6ea9248c49cdacc371ec12ccc" exitCode=0 Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.800206 4751 generic.go:334] "Generic (PLEG): container finished" podID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerID="fcda05dd91f891b6b10d97096bd01c5909bc42dc90db535c273e9630d9ad1d16" exitCode=2 Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.800217 4751 generic.go:334] "Generic (PLEG): container finished" podID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerID="fc7b159909c752eb3954d73a3b4fb088f0cc4914ab1f1a66d75adb6cdd4ca970" exitCode=0 Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.799954 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62fe5344-a9e0-40c6-9c81-06061248f1f6","Type":"ContainerDied","Data":"2694e2dbfd66f237208db55c844cda317cabffa6ea9248c49cdacc371ec12ccc"} Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.800282 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62fe5344-a9e0-40c6-9c81-06061248f1f6","Type":"ContainerDied","Data":"fcda05dd91f891b6b10d97096bd01c5909bc42dc90db535c273e9630d9ad1d16"} Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.800297 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62fe5344-a9e0-40c6-9c81-06061248f1f6","Type":"ContainerDied","Data":"fc7b159909c752eb3954d73a3b4fb088f0cc4914ab1f1a66d75adb6cdd4ca970"} Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.803168 4751 generic.go:334] "Generic (PLEG): container finished" podID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerID="8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe" exitCode=143 Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.803276 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e039239f-9678-4ac5-bbd9-31120a7e569a","Type":"ContainerDied","Data":"8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe"} Jan 30 21:40:57 crc kubenswrapper[4751]: I0130 21:40:57.160661 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:57 crc kubenswrapper[4751]: I0130 21:40:57.445422 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 21:40:57 crc kubenswrapper[4751]: I0130 21:40:57.445474 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.544853 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.642393 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e039239f-9678-4ac5-bbd9-31120a7e569a-logs\") pod \"e039239f-9678-4ac5-bbd9-31120a7e569a\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.642549 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndzzh\" (UniqueName: \"kubernetes.io/projected/e039239f-9678-4ac5-bbd9-31120a7e569a-kube-api-access-ndzzh\") pod \"e039239f-9678-4ac5-bbd9-31120a7e569a\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.642756 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-combined-ca-bundle\") pod \"e039239f-9678-4ac5-bbd9-31120a7e569a\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.642799 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-config-data\") pod \"e039239f-9678-4ac5-bbd9-31120a7e569a\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.643101 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e039239f-9678-4ac5-bbd9-31120a7e569a-logs" (OuterVolumeSpecName: "logs") pod "e039239f-9678-4ac5-bbd9-31120a7e569a" (UID: "e039239f-9678-4ac5-bbd9-31120a7e569a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.643593 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e039239f-9678-4ac5-bbd9-31120a7e569a-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.651726 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e039239f-9678-4ac5-bbd9-31120a7e569a-kube-api-access-ndzzh" (OuterVolumeSpecName: "kube-api-access-ndzzh") pod "e039239f-9678-4ac5-bbd9-31120a7e569a" (UID: "e039239f-9678-4ac5-bbd9-31120a7e569a"). InnerVolumeSpecName "kube-api-access-ndzzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.703026 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e039239f-9678-4ac5-bbd9-31120a7e569a" (UID: "e039239f-9678-4ac5-bbd9-31120a7e569a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.733494 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-config-data" (OuterVolumeSpecName: "config-data") pod "e039239f-9678-4ac5-bbd9-31120a7e569a" (UID: "e039239f-9678-4ac5-bbd9-31120a7e569a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.745638 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndzzh\" (UniqueName: \"kubernetes.io/projected/e039239f-9678-4ac5-bbd9-31120a7e569a-kube-api-access-ndzzh\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.745677 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.745691 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.832567 4751 generic.go:334] "Generic (PLEG): container finished" podID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerID="512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8" exitCode=0 Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.832610 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e039239f-9678-4ac5-bbd9-31120a7e569a","Type":"ContainerDied","Data":"512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8"} Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.832636 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e039239f-9678-4ac5-bbd9-31120a7e569a","Type":"ContainerDied","Data":"6f2d3e4731c7bd546c8fba7d19f7cf9cd29af48c0d46d6409c89f580aefa0bd4"} Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.832637 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.832655 4751 scope.go:117] "RemoveContainer" containerID="512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.865959 4751 scope.go:117] "RemoveContainer" containerID="8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.868545 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.889720 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.902125 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 21:40:59 crc kubenswrapper[4751]: E0130 21:40:59.902663 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerName="nova-api-api" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.902685 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerName="nova-api-api" Jan 30 21:40:59 crc kubenswrapper[4751]: E0130 21:40:59.902719 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerName="nova-api-log" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.902728 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerName="nova-api-log" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.903001 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerName="nova-api-log" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.903031 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerName="nova-api-api" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.904951 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.908669 4751 scope.go:117] "RemoveContainer" containerID="512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.908843 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.908918 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 21:40:59 crc kubenswrapper[4751]: E0130 21:40:59.909073 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8\": container with ID starting with 512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8 not found: ID does not exist" containerID="512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.909100 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8"} err="failed to get container status \"512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8\": rpc error: code = NotFound desc = could not find container \"512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8\": container with ID starting with 512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8 not found: ID does not exist" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.909120 4751 scope.go:117] "RemoveContainer" containerID="8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.909267 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 21:40:59 crc kubenswrapper[4751]: E0130 21:40:59.909420 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe\": container with ID starting with 8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe not found: ID does not exist" containerID="8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.909459 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe"} err="failed to get container status \"8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe\": rpc error: code = NotFound desc = could not find container \"8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe\": container with ID starting with 8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe not found: ID does not exist" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.929763 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.991342 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" path="/var/lib/kubelet/pods/e039239f-9678-4ac5-bbd9-31120a7e569a/volumes" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.051915 4751 reconciler_common.go:245] 
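Annotation: the SyncLoop DELETE → REMOVE → ADD run above is a pod being replaced in place: nova-api-0 keeps its name but changes UID (e039239f-9678-... to b3c695f6-b212-...), and the cpu/memory managers purge the old UID's stale state. A sketch for spotting such replacements in a journal by watching the `pod "NAME" (UID: "...")` fragments; the regex is an assumption about this exact log text, including klog's escaped quotes.

```go
// uidswap.go — sketch: detect a pod deleted and recreated under the same
// name (i.e. a new UID), as nova-api-0 is in the entries above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var podUID = regexp.MustCompile(`pod \\?"([A-Za-z0-9.-]+)\\?" \(UID: \\?"([0-9a-f-]{36})\\?"\)`)

func main() {
	last := map[string]string{} // pod name -> most recently seen UID
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		for _, m := range podUID.FindAllStringSubmatch(sc.Text(), -1) {
			name, uid := m[1], m[2]
			if prev, ok := last[name]; ok && prev != uid {
				fmt.Printf("%s replaced: %s -> %s\n", name, prev, uid)
			}
			last[name] = uid
		}
	}
}
```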
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.052079 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-public-tls-certs\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.052313 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.052375 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3c695f6-b212-4f47-9a88-76996d92772d-logs\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.052450 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldhm5\" (UniqueName: \"kubernetes.io/projected/b3c695f6-b212-4f47-9a88-76996d92772d-kube-api-access-ldhm5\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.052528 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-config-data\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.154684 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldhm5\" (UniqueName: \"kubernetes.io/projected/b3c695f6-b212-4f47-9a88-76996d92772d-kube-api-access-ldhm5\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.154762 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-config-data\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.154829 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.154873 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-public-tls-certs\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " 
pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.154942 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.154969 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3c695f6-b212-4f47-9a88-76996d92772d-logs\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.155402 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3c695f6-b212-4f47-9a88-76996d92772d-logs\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.162417 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.182138 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.182618 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-config-data\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.182838 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-public-tls-certs\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.200716 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldhm5\" (UniqueName: \"kubernetes.io/projected/b3c695f6-b212-4f47-9a88-76996d92772d-kube-api-access-ldhm5\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.232849 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.823098 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.844126 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3c695f6-b212-4f47-9a88-76996d92772d","Type":"ContainerStarted","Data":"b0dd113a348b24b15d13958c7f00387904d846fc6379d3d8b34c108e756e0709"} Jan 30 21:41:01 crc kubenswrapper[4751]: I0130 21:41:01.858222 4751 generic.go:334] "Generic (PLEG): container finished" podID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerID="2ef5340b3986daee0e9ee3b7fa6ae7d3a92b434f52410ab75ef5c07485682360" exitCode=0 Jan 30 21:41:01 crc kubenswrapper[4751]: I0130 21:41:01.858372 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62fe5344-a9e0-40c6-9c81-06061248f1f6","Type":"ContainerDied","Data":"2ef5340b3986daee0e9ee3b7fa6ae7d3a92b434f52410ab75ef5c07485682360"} Jan 30 21:41:01 crc kubenswrapper[4751]: I0130 21:41:01.860875 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3c695f6-b212-4f47-9a88-76996d92772d","Type":"ContainerStarted","Data":"493a2c726ee0c7312a91490a0bea812358c26401fdc9a767242108a7737a1808"} Jan 30 21:41:01 crc kubenswrapper[4751]: I0130 21:41:01.860901 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3c695f6-b212-4f47-9a88-76996d92772d","Type":"ContainerStarted","Data":"e4635b2c42789dec615eb35af87abe175cead9e8bde37a6b842b0a483841edba"} Jan 30 21:41:01 crc kubenswrapper[4751]: I0130 21:41:01.898608 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.898589261 podStartE2EDuration="2.898589261s" podCreationTimestamp="2026-01-30 21:40:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:01.877034234 +0000 UTC m=+1600.622856873" watchObservedRunningTime="2026-01-30 21:41:01.898589261 +0000 UTC m=+1600.644411910" Jan 30 21:41:01 crc kubenswrapper[4751]: I0130 21:41:01.998395 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.122441 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-run-httpd\") pod \"62fe5344-a9e0-40c6-9c81-06061248f1f6\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.122507 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-log-httpd\") pod \"62fe5344-a9e0-40c6-9c81-06061248f1f6\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.122579 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-combined-ca-bundle\") pod \"62fe5344-a9e0-40c6-9c81-06061248f1f6\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.122630 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-scripts\") pod \"62fe5344-a9e0-40c6-9c81-06061248f1f6\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.122670 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-config-data\") pod \"62fe5344-a9e0-40c6-9c81-06061248f1f6\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.122717 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg2hf\" (UniqueName: \"kubernetes.io/projected/62fe5344-a9e0-40c6-9c81-06061248f1f6-kube-api-access-fg2hf\") pod \"62fe5344-a9e0-40c6-9c81-06061248f1f6\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.122758 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-sg-core-conf-yaml\") pod \"62fe5344-a9e0-40c6-9c81-06061248f1f6\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.124691 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "62fe5344-a9e0-40c6-9c81-06061248f1f6" (UID: "62fe5344-a9e0-40c6-9c81-06061248f1f6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.124620 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "62fe5344-a9e0-40c6-9c81-06061248f1f6" (UID: "62fe5344-a9e0-40c6-9c81-06061248f1f6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.130953 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-scripts" (OuterVolumeSpecName: "scripts") pod "62fe5344-a9e0-40c6-9c81-06061248f1f6" (UID: "62fe5344-a9e0-40c6-9c81-06061248f1f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.131294 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62fe5344-a9e0-40c6-9c81-06061248f1f6-kube-api-access-fg2hf" (OuterVolumeSpecName: "kube-api-access-fg2hf") pod "62fe5344-a9e0-40c6-9c81-06061248f1f6" (UID: "62fe5344-a9e0-40c6-9c81-06061248f1f6"). InnerVolumeSpecName "kube-api-access-fg2hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.160876 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.164397 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "62fe5344-a9e0-40c6-9c81-06061248f1f6" (UID: "62fe5344-a9e0-40c6-9c81-06061248f1f6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.180777 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.226301 4751 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.226354 4751 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.226370 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.226381 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg2hf\" (UniqueName: \"kubernetes.io/projected/62fe5344-a9e0-40c6-9c81-06061248f1f6-kube-api-access-fg2hf\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.226393 4751 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.229232 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62fe5344-a9e0-40c6-9c81-06061248f1f6" (UID: "62fe5344-a9e0-40c6-9c81-06061248f1f6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.246960 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-config-data" (OuterVolumeSpecName: "config-data") pod "62fe5344-a9e0-40c6-9c81-06061248f1f6" (UID: "62fe5344-a9e0-40c6-9c81-06061248f1f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.331450 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.331490 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.444728 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.444766 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.933075 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62fe5344-a9e0-40c6-9c81-06061248f1f6","Type":"ContainerDied","Data":"952c0e68caba01e8a19179d8cae039bc4ad5143d5f3d94ca42acc30936e372b0"} Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.933152 4751 scope.go:117] "RemoveContainer" containerID="2694e2dbfd66f237208db55c844cda317cabffa6ea9248c49cdacc371ec12ccc" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.933382 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.951846 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.980285 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.014601 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.035290 4751 scope.go:117] "RemoveContainer" containerID="fcda05dd91f891b6b10d97096bd01c5909bc42dc90db535c273e9630d9ad1d16" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.072940 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:41:03 crc kubenswrapper[4751]: E0130 21:41:03.073448 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="sg-core" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.073461 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="sg-core" Jan 30 21:41:03 crc kubenswrapper[4751]: E0130 21:41:03.073474 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="ceilometer-central-agent" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.073482 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="ceilometer-central-agent" Jan 30 21:41:03 crc kubenswrapper[4751]: E0130 21:41:03.073491 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="ceilometer-notification-agent" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.073497 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="ceilometer-notification-agent" Jan 30 21:41:03 crc kubenswrapper[4751]: E0130 21:41:03.073531 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="proxy-httpd" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.073539 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="proxy-httpd" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.073735 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="ceilometer-central-agent" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.073753 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="sg-core" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.073763 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="proxy-httpd" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.073777 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="ceilometer-notification-agent" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.075889 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.083519 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.087711 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.090940 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.158256 4751 scope.go:117] "RemoveContainer" containerID="fc7b159909c752eb3954d73a3b4fb088f0cc4914ab1f1a66d75adb6cdd4ca970" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.164103 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-scripts\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.164138 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.164209 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.164475 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrzcv\" (UniqueName: \"kubernetes.io/projected/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-kube-api-access-jrzcv\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.164496 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-log-httpd\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.168879 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-config-data\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.168936 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-run-httpd\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.233170 4751 scope.go:117] "RemoveContainer" containerID="2ef5340b3986daee0e9ee3b7fa6ae7d3a92b434f52410ab75ef5c07485682360" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 
21:41:03.240637 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-cxj8k"] Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.254319 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.261776 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.262092 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.303617 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-scripts\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.303657 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.303767 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.304043 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrzcv\" (UniqueName: \"kubernetes.io/projected/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-kube-api-access-jrzcv\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.304066 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-log-httpd\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.304281 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-config-data\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.304316 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-run-httpd\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.319800 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-cxj8k"] Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.329096 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.330688 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-log-httpd\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.330944 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-run-httpd\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.332277 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.337028 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-scripts\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.345550 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrzcv\" (UniqueName: \"kubernetes.io/projected/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-kube-api-access-jrzcv\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.381968 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.398633 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-config-data\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.473634 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hpws7"] Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.474151 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" podUID="f5ccd9fd-19b5-4def-9fec-de483cdc8282" containerName="dnsmasq-dns" containerID="cri-o://aa2893876b0b686f16a08289b6eaf353d9eb4d024f387b3760981d23221d7e36" gracePeriod=10 Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.474276 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.485977 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-scripts\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.486101 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.486303 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-config-data\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.486549 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vhjk\" (UniqueName: \"kubernetes.io/projected/97ec060c-3c30-41e4-946c-7fb4584c7e85-kube-api-access-5vhjk\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.490269 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.490658 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.596353 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-scripts\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.596655 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.596725 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-config-data\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " 
pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.596790 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vhjk\" (UniqueName: \"kubernetes.io/projected/97ec060c-3c30-41e4-946c-7fb4584c7e85-kube-api-access-5vhjk\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.606719 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-config-data\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.610232 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.624101 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vhjk\" (UniqueName: \"kubernetes.io/projected/97ec060c-3c30-41e4-946c-7fb4584c7e85-kube-api-access-5vhjk\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.626496 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-scripts\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.903519 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.949837 4751 generic.go:334] "Generic (PLEG): container finished" podID="f5ccd9fd-19b5-4def-9fec-de483cdc8282" containerID="aa2893876b0b686f16a08289b6eaf353d9eb4d024f387b3760981d23221d7e36" exitCode=0 Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.949926 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" event={"ID":"f5ccd9fd-19b5-4def-9fec-de483cdc8282","Type":"ContainerDied","Data":"aa2893876b0b686f16a08289b6eaf353d9eb4d024f387b3760981d23221d7e36"} Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.013043 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" path="/var/lib/kubelet/pods/62fe5344-a9e0-40c6-9c81-06061248f1f6/volumes" Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.152629 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.216207 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-swift-storage-0\") pod \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.216264 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-svc\") pod \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.216315 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-nb\") pod \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.216415 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qn7n\" (UniqueName: \"kubernetes.io/projected/f5ccd9fd-19b5-4def-9fec-de483cdc8282-kube-api-access-4qn7n\") pod \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.216487 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-config\") pod \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.216762 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-sb\") pod \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.222030 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5ccd9fd-19b5-4def-9fec-de483cdc8282-kube-api-access-4qn7n" (OuterVolumeSpecName: "kube-api-access-4qn7n") pod "f5ccd9fd-19b5-4def-9fec-de483cdc8282" (UID: "f5ccd9fd-19b5-4def-9fec-de483cdc8282"). InnerVolumeSpecName "kube-api-access-4qn7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.308876 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-config" (OuterVolumeSpecName: "config") pod "f5ccd9fd-19b5-4def-9fec-de483cdc8282" (UID: "f5ccd9fd-19b5-4def-9fec-de483cdc8282"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.319165 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qn7n\" (UniqueName: \"kubernetes.io/projected/f5ccd9fd-19b5-4def-9fec-de483cdc8282-kube-api-access-4qn7n\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.319192 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.323232 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f5ccd9fd-19b5-4def-9fec-de483cdc8282" (UID: "f5ccd9fd-19b5-4def-9fec-de483cdc8282"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.338188 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f5ccd9fd-19b5-4def-9fec-de483cdc8282" (UID: "f5ccd9fd-19b5-4def-9fec-de483cdc8282"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.377208 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.396837 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f5ccd9fd-19b5-4def-9fec-de483cdc8282" (UID: "f5ccd9fd-19b5-4def-9fec-de483cdc8282"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.402632 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f5ccd9fd-19b5-4def-9fec-de483cdc8282" (UID: "f5ccd9fd-19b5-4def-9fec-de483cdc8282"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.421123 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.421155 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.421168 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.421176 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.602500 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-cxj8k"] Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.964002 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" event={"ID":"f5ccd9fd-19b5-4def-9fec-de483cdc8282","Type":"ContainerDied","Data":"90582d1ee044bf5a553f8b95b8254b85197e43f8fadf0df2ec897950782d7dc8"} Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.964464 4751 scope.go:117] "RemoveContainer" containerID="aa2893876b0b686f16a08289b6eaf353d9eb4d024f387b3760981d23221d7e36" Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.964771 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.976090 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:41:04 crc kubenswrapper[4751]: E0130 21:41:04.976369 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.977334 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4","Type":"ContainerStarted","Data":"b29538a0f0729315334798054b08c861d4a2892710b8f4d3c3dc99157c776bfc"} Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.988305 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-cxj8k" event={"ID":"97ec060c-3c30-41e4-946c-7fb4584c7e85","Type":"ContainerStarted","Data":"d9cc3235ea6a465f2a125270f4c9765fed925e13c4baa2e715494daa6238d33f"} Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.988333 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-cxj8k" event={"ID":"97ec060c-3c30-41e4-946c-7fb4584c7e85","Type":"ContainerStarted","Data":"2b7070a6d1fa5810130342db56a18fb257d8293b60ab7a27b1597f49cdb9136f"} Jan 30 21:41:05 crc kubenswrapper[4751]: I0130 21:41:05.028240 4751 scope.go:117] "RemoveContainer" containerID="2cef368e1d9de3d2fb099a0412649b6c02ad1c0e0295100cf195bfffa3dcf34f" Jan 30 21:41:05 crc kubenswrapper[4751]: I0130 21:41:05.049469 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hpws7"] Jan 30 21:41:05 crc kubenswrapper[4751]: I0130 21:41:05.063561 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hpws7"] Jan 30 21:41:05 crc kubenswrapper[4751]: I0130 21:41:05.064751 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-cxj8k" podStartSLOduration=2.06472922 podStartE2EDuration="2.06472922s" podCreationTimestamp="2026-01-30 21:41:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:05.044513038 +0000 UTC m=+1603.790335687" watchObservedRunningTime="2026-01-30 21:41:05.06472922 +0000 UTC m=+1603.810551879" Jan 30 21:41:06 crc kubenswrapper[4751]: I0130 21:41:06.019071 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5ccd9fd-19b5-4def-9fec-de483cdc8282" path="/var/lib/kubelet/pods/f5ccd9fd-19b5-4def-9fec-de483cdc8282/volumes" Jan 30 21:41:06 crc kubenswrapper[4751]: I0130 21:41:06.057671 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4","Type":"ContainerStarted","Data":"9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d"} Jan 30 21:41:07 crc kubenswrapper[4751]: I0130 21:41:07.072960 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4","Type":"ContainerStarted","Data":"42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4"} Jan 30 21:41:07 crc kubenswrapper[4751]: I0130 21:41:07.073298 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4","Type":"ContainerStarted","Data":"7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0"} Jan 30 21:41:08 crc kubenswrapper[4751]: I0130 21:41:08.908551 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" podUID="f5ccd9fd-19b5-4def-9fec-de483cdc8282" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.251:5353: i/o timeout" Jan 30 21:41:10 crc kubenswrapper[4751]: I0130 21:41:10.113679 4751 generic.go:334] "Generic (PLEG): container finished" podID="97ec060c-3c30-41e4-946c-7fb4584c7e85" containerID="d9cc3235ea6a465f2a125270f4c9765fed925e13c4baa2e715494daa6238d33f" exitCode=0 Jan 30 21:41:10 crc kubenswrapper[4751]: I0130 21:41:10.113812 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-cxj8k" event={"ID":"97ec060c-3c30-41e4-946c-7fb4584c7e85","Type":"ContainerDied","Data":"d9cc3235ea6a465f2a125270f4c9765fed925e13c4baa2e715494daa6238d33f"} Jan 30 21:41:10 crc kubenswrapper[4751]: I0130 21:41:10.117064 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4","Type":"ContainerStarted","Data":"75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c"} Jan 30 21:41:10 crc kubenswrapper[4751]: I0130 21:41:10.117492 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 21:41:10 crc kubenswrapper[4751]: I0130 21:41:10.163107 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.412577037 podStartE2EDuration="8.163079648s" podCreationTimestamp="2026-01-30 21:41:02 +0000 UTC" firstStartedPulling="2026-01-30 21:41:04.376615641 +0000 UTC m=+1603.122438290" lastFinishedPulling="2026-01-30 21:41:09.127118242 +0000 UTC m=+1607.872940901" observedRunningTime="2026-01-30 21:41:10.152814034 +0000 UTC m=+1608.898636703" watchObservedRunningTime="2026-01-30 21:41:10.163079648 +0000 UTC m=+1608.908902347" Jan 30 21:41:10 crc kubenswrapper[4751]: I0130 21:41:10.241833 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 21:41:10 crc kubenswrapper[4751]: I0130 21:41:10.242243 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.268552 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.8:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.268577 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.8:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.669204 4751 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.711101 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-config-data\") pod \"97ec060c-3c30-41e4-946c-7fb4584c7e85\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.711183 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-combined-ca-bundle\") pod \"97ec060c-3c30-41e4-946c-7fb4584c7e85\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.711263 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-scripts\") pod \"97ec060c-3c30-41e4-946c-7fb4584c7e85\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.733431 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-scripts" (OuterVolumeSpecName: "scripts") pod "97ec060c-3c30-41e4-946c-7fb4584c7e85" (UID: "97ec060c-3c30-41e4-946c-7fb4584c7e85"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.757959 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-config-data" (OuterVolumeSpecName: "config-data") pod "97ec060c-3c30-41e4-946c-7fb4584c7e85" (UID: "97ec060c-3c30-41e4-946c-7fb4584c7e85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.759477 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97ec060c-3c30-41e4-946c-7fb4584c7e85" (UID: "97ec060c-3c30-41e4-946c-7fb4584c7e85"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.822833 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vhjk\" (UniqueName: \"kubernetes.io/projected/97ec060c-3c30-41e4-946c-7fb4584c7e85-kube-api-access-5vhjk\") pod \"97ec060c-3c30-41e4-946c-7fb4584c7e85\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.824444 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.824468 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.824479 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.837432 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ec060c-3c30-41e4-946c-7fb4584c7e85-kube-api-access-5vhjk" (OuterVolumeSpecName: "kube-api-access-5vhjk") pod "97ec060c-3c30-41e4-946c-7fb4584c7e85" (UID: "97ec060c-3c30-41e4-946c-7fb4584c7e85"). InnerVolumeSpecName "kube-api-access-5vhjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.929316 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vhjk\" (UniqueName: \"kubernetes.io/projected/97ec060c-3c30-41e4-946c-7fb4584c7e85-kube-api-access-5vhjk\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.187431 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.190712 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-cxj8k" event={"ID":"97ec060c-3c30-41e4-946c-7fb4584c7e85","Type":"ContainerDied","Data":"2b7070a6d1fa5810130342db56a18fb257d8293b60ab7a27b1597f49cdb9136f"} Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.190763 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b7070a6d1fa5810130342db56a18fb257d8293b60ab7a27b1597f49cdb9136f" Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.354471 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.354742 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" containerName="nova-api-log" containerID="cri-o://e4635b2c42789dec615eb35af87abe175cead9e8bde37a6b842b0a483841edba" gracePeriod=30 Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.354797 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" containerName="nova-api-api" containerID="cri-o://493a2c726ee0c7312a91490a0bea812358c26401fdc9a767242108a7737a1808" gracePeriod=30 Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.375160 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.375525 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" containerName="nova-scheduler-scheduler" containerID="cri-o://71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334" gracePeriod=30 Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.397683 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.397901 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" containerName="nova-metadata-log" containerID="cri-o://d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69" gracePeriod=30 Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.398399 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" containerName="nova-metadata-metadata" containerID="cri-o://5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303" gracePeriod=30 Jan 30 21:41:13 crc kubenswrapper[4751]: I0130 21:41:13.224021 4751 generic.go:334] "Generic (PLEG): container finished" podID="b3c695f6-b212-4f47-9a88-76996d92772d" containerID="e4635b2c42789dec615eb35af87abe175cead9e8bde37a6b842b0a483841edba" exitCode=143 Jan 30 21:41:13 crc kubenswrapper[4751]: I0130 21:41:13.224457 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3c695f6-b212-4f47-9a88-76996d92772d","Type":"ContainerDied","Data":"e4635b2c42789dec615eb35af87abe175cead9e8bde37a6b842b0a483841edba"} Jan 30 21:41:13 crc kubenswrapper[4751]: I0130 21:41:13.230474 4751 generic.go:334] "Generic (PLEG): container finished" podID="df87edd8-7be6-4739-b927-7fd4415a1945" 
containerID="d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69" exitCode=143 Jan 30 21:41:13 crc kubenswrapper[4751]: I0130 21:41:13.230516 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"df87edd8-7be6-4739-b927-7fd4415a1945","Type":"ContainerDied","Data":"d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69"} Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.079111 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.239497 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-combined-ca-bundle\") pod \"df87edd8-7be6-4739-b927-7fd4415a1945\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.240044 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-nova-metadata-tls-certs\") pod \"df87edd8-7be6-4739-b927-7fd4415a1945\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.240109 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df87edd8-7be6-4739-b927-7fd4415a1945-logs\") pod \"df87edd8-7be6-4739-b927-7fd4415a1945\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.240189 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-config-data\") pod \"df87edd8-7be6-4739-b927-7fd4415a1945\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.240407 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5htcz\" (UniqueName: \"kubernetes.io/projected/df87edd8-7be6-4739-b927-7fd4415a1945-kube-api-access-5htcz\") pod \"df87edd8-7be6-4739-b927-7fd4415a1945\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.240722 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df87edd8-7be6-4739-b927-7fd4415a1945-logs" (OuterVolumeSpecName: "logs") pod "df87edd8-7be6-4739-b927-7fd4415a1945" (UID: "df87edd8-7be6-4739-b927-7fd4415a1945"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.240979 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df87edd8-7be6-4739-b927-7fd4415a1945-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.248823 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df87edd8-7be6-4739-b927-7fd4415a1945-kube-api-access-5htcz" (OuterVolumeSpecName: "kube-api-access-5htcz") pod "df87edd8-7be6-4739-b927-7fd4415a1945" (UID: "df87edd8-7be6-4739-b927-7fd4415a1945"). InnerVolumeSpecName "kube-api-access-5htcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.269564 4751 generic.go:334] "Generic (PLEG): container finished" podID="df87edd8-7be6-4739-b927-7fd4415a1945" containerID="5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303" exitCode=0 Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.269642 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"df87edd8-7be6-4739-b927-7fd4415a1945","Type":"ContainerDied","Data":"5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303"} Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.269672 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"df87edd8-7be6-4739-b927-7fd4415a1945","Type":"ContainerDied","Data":"204fbf404d1c8ba52093e260239c68e9a0a2f21f814a66b0d5a430c499419aa4"} Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.269691 4751 scope.go:117] "RemoveContainer" containerID="5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.269839 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.272286 4751 generic.go:334] "Generic (PLEG): container finished" podID="27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" containerID="71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334" exitCode=0 Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.272372 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3","Type":"ContainerDied","Data":"71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334"} Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.273140 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df87edd8-7be6-4739-b927-7fd4415a1945" (UID: "df87edd8-7be6-4739-b927-7fd4415a1945"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.277217 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-config-data" (OuterVolumeSpecName: "config-data") pod "df87edd8-7be6-4739-b927-7fd4415a1945" (UID: "df87edd8-7be6-4739-b927-7fd4415a1945"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.316661 4751 scope.go:117] "RemoveContainer" containerID="d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.326588 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "df87edd8-7be6-4739-b927-7fd4415a1945" (UID: "df87edd8-7be6-4739-b927-7fd4415a1945"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.343378 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5htcz\" (UniqueName: \"kubernetes.io/projected/df87edd8-7be6-4739-b927-7fd4415a1945-kube-api-access-5htcz\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.343415 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.343426 4751 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.343438 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.343942 4751 scope.go:117] "RemoveContainer" containerID="5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303" Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.344451 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303\": container with ID starting with 5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303 not found: ID does not exist" containerID="5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.344477 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303"} err="failed to get container status \"5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303\": rpc error: code = NotFound desc = could not find container \"5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303\": container with ID starting with 5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303 not found: ID does not exist" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.344498 4751 scope.go:117] "RemoveContainer" containerID="d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69" Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.344764 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69\": container with ID starting with d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69 not found: ID does not exist" containerID="d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.344786 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69"} err="failed to get container status \"d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69\": rpc error: code = NotFound desc = could not find container \"d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69\": container with ID starting with 
d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69 not found: ID does not exist" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.619427 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.639274 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.666105 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.666745 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" containerName="nova-metadata-log" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.666767 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" containerName="nova-metadata-log" Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.666784 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ec060c-3c30-41e4-946c-7fb4584c7e85" containerName="nova-manage" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.666793 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ec060c-3c30-41e4-946c-7fb4584c7e85" containerName="nova-manage" Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.666813 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ccd9fd-19b5-4def-9fec-de483cdc8282" containerName="init" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.666821 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ccd9fd-19b5-4def-9fec-de483cdc8282" containerName="init" Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.666843 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ccd9fd-19b5-4def-9fec-de483cdc8282" containerName="dnsmasq-dns" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.666850 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ccd9fd-19b5-4def-9fec-de483cdc8282" containerName="dnsmasq-dns" Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.666870 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" containerName="nova-metadata-metadata" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.666878 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" containerName="nova-metadata-metadata" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.667237 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" containerName="nova-metadata-metadata" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.667262 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5ccd9fd-19b5-4def-9fec-de483cdc8282" containerName="dnsmasq-dns" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.667289 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" containerName="nova-metadata-log" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.667311 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="97ec060c-3c30-41e4-946c-7fb4584c7e85" containerName="nova-manage" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.668952 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.673093 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.673538 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.679912 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.755144 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/179951f5-39be-43d7-a2fa-3c6f04555760-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.755495 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/179951f5-39be-43d7-a2fa-3c6f04555760-config-data\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.755660 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp2tl\" (UniqueName: \"kubernetes.io/projected/179951f5-39be-43d7-a2fa-3c6f04555760-kube-api-access-jp2tl\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.755778 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/179951f5-39be-43d7-a2fa-3c6f04555760-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.755935 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/179951f5-39be-43d7-a2fa-3c6f04555760-logs\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.857867 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/179951f5-39be-43d7-a2fa-3c6f04555760-config-data\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.857960 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp2tl\" (UniqueName: \"kubernetes.io/projected/179951f5-39be-43d7-a2fa-3c6f04555760-kube-api-access-jp2tl\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.857981 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/179951f5-39be-43d7-a2fa-3c6f04555760-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " 
pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.858050 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/179951f5-39be-43d7-a2fa-3c6f04555760-logs\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.858103 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/179951f5-39be-43d7-a2fa-3c6f04555760-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.859196 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/179951f5-39be-43d7-a2fa-3c6f04555760-logs\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.863053 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/179951f5-39be-43d7-a2fa-3c6f04555760-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.873826 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/179951f5-39be-43d7-a2fa-3c6f04555760-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.888226 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp2tl\" (UniqueName: \"kubernetes.io/projected/179951f5-39be-43d7-a2fa-3c6f04555760-kube-api-access-jp2tl\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.898072 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/179951f5-39be-43d7-a2fa-3c6f04555760-config-data\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.929935 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334 is running failed: container process not found" containerID="71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.930163 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334 is running failed: container process not found" containerID="71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.930368 4751 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334 is running failed: container process not found" containerID="71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.930390 4751 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" containerName="nova-scheduler-scheduler" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.975645 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.975932 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.989357 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.010059 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.117863 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.163764 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-combined-ca-bundle\") pod \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.163877 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5sm5\" (UniqueName: \"kubernetes.io/projected/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-kube-api-access-k5sm5\") pod \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.164110 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-config-data\") pod \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.171603 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-kube-api-access-k5sm5" (OuterVolumeSpecName: "kube-api-access-k5sm5") pod "27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" (UID: "27bb36a2-bfe5-4dca-a828-ea50cd77e9f3"). InnerVolumeSpecName "kube-api-access-k5sm5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.204002 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-combined-ca-bundle podName:27bb36a2-bfe5-4dca-a828-ea50cd77e9f3 nodeName:}" failed. No retries permitted until 2026-01-30 21:41:17.703971745 +0000 UTC m=+1616.449794394 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-combined-ca-bundle") pod "27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" (UID: "27bb36a2-bfe5-4dca-a828-ea50cd77e9f3") : error deleting /var/lib/kubelet/pods/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3/volume-subpaths: remove /var/lib/kubelet/pods/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3/volume-subpaths: no such file or directory Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.207550 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-config-data" (OuterVolumeSpecName: "config-data") pod "27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" (UID: "27bb36a2-bfe5-4dca-a828-ea50cd77e9f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.266557 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-combined-ca-bundle\") pod \"80a202f4-615a-4f93-86ef-46b6a994dd48\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.266713 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-config-data\") pod \"80a202f4-615a-4f93-86ef-46b6a994dd48\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.266853 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgzzr\" (UniqueName: \"kubernetes.io/projected/80a202f4-615a-4f93-86ef-46b6a994dd48-kube-api-access-zgzzr\") pod \"80a202f4-615a-4f93-86ef-46b6a994dd48\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.266875 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-scripts\") pod \"80a202f4-615a-4f93-86ef-46b6a994dd48\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.267574 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.267593 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5sm5\" (UniqueName: \"kubernetes.io/projected/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-kube-api-access-k5sm5\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.271475 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-scripts" (OuterVolumeSpecName: "scripts") pod "80a202f4-615a-4f93-86ef-46b6a994dd48" 
(UID: "80a202f4-615a-4f93-86ef-46b6a994dd48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.275788 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80a202f4-615a-4f93-86ef-46b6a994dd48-kube-api-access-zgzzr" (OuterVolumeSpecName: "kube-api-access-zgzzr") pod "80a202f4-615a-4f93-86ef-46b6a994dd48" (UID: "80a202f4-615a-4f93-86ef-46b6a994dd48"). InnerVolumeSpecName "kube-api-access-zgzzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.292125 4751 generic.go:334] "Generic (PLEG): container finished" podID="b3c695f6-b212-4f47-9a88-76996d92772d" containerID="493a2c726ee0c7312a91490a0bea812358c26401fdc9a767242108a7737a1808" exitCode=0 Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.292416 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3c695f6-b212-4f47-9a88-76996d92772d","Type":"ContainerDied","Data":"493a2c726ee0c7312a91490a0bea812358c26401fdc9a767242108a7737a1808"} Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.299484 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3","Type":"ContainerDied","Data":"48f5ec0c53f7e04ad3c659a4b9e04d6883529b029a3420700108431ca3b92a48"} Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.299881 4751 scope.go:117] "RemoveContainer" containerID="71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.300069 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.308236 4751 generic.go:334] "Generic (PLEG): container finished" podID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerID="9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598" exitCode=137 Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.308469 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"80a202f4-615a-4f93-86ef-46b6a994dd48","Type":"ContainerDied","Data":"9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598"} Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.308642 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"80a202f4-615a-4f93-86ef-46b6a994dd48","Type":"ContainerDied","Data":"5a4c0dc75f2802cc1bc85d8688e41faab929773b0378a29a4bf4c2cf7cf3db55"} Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.308530 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.331671 4751 scope.go:117] "RemoveContainer" containerID="9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.355756 4751 scope.go:117] "RemoveContainer" containerID="f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.370795 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgzzr\" (UniqueName: \"kubernetes.io/projected/80a202f4-615a-4f93-86ef-46b6a994dd48-kube-api-access-zgzzr\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.371000 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.385914 4751 scope.go:117] "RemoveContainer" containerID="e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.401222 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.430530 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-config-data" (OuterVolumeSpecName: "config-data") pod "80a202f4-615a-4f93-86ef-46b6a994dd48" (UID: "80a202f4-615a-4f93-86ef-46b6a994dd48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.432643 4751 scope.go:117] "RemoveContainer" containerID="ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.436255 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80a202f4-615a-4f93-86ef-46b6a994dd48" (UID: "80a202f4-615a-4f93-86ef-46b6a994dd48"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.469060 4751 scope.go:117] "RemoveContainer" containerID="9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598" Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.469563 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598\": container with ID starting with 9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598 not found: ID does not exist" containerID="9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.469603 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598"} err="failed to get container status \"9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598\": rpc error: code = NotFound desc = could not find container \"9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598\": container with ID starting with 9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598 not found: ID does not exist" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.469630 4751 scope.go:117] "RemoveContainer" containerID="f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb" Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.469935 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb\": container with ID starting with f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb not found: ID does not exist" containerID="f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.469985 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb"} err="failed to get container status \"f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb\": rpc error: code = NotFound desc = could not find container \"f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb\": container with ID starting with f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb not found: ID does not exist" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.470008 4751 scope.go:117] "RemoveContainer" containerID="e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19" Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.470236 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19\": container with ID starting with e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19 not found: ID does not exist" containerID="e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.470262 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19"} err="failed to get container status \"e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19\": rpc error: code = NotFound desc = could not 
find container \"e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19\": container with ID starting with e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19 not found: ID does not exist" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.470282 4751 scope.go:117] "RemoveContainer" containerID="ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd" Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.470718 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd\": container with ID starting with ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd not found: ID does not exist" containerID="ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.470741 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd"} err="failed to get container status \"ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd\": rpc error: code = NotFound desc = could not find container \"ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd\": container with ID starting with ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd not found: ID does not exist" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.473526 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.473553 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.575006 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3c695f6-b212-4f47-9a88-76996d92772d-logs\") pod \"b3c695f6-b212-4f47-9a88-76996d92772d\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.575173 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-config-data\") pod \"b3c695f6-b212-4f47-9a88-76996d92772d\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.575215 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-internal-tls-certs\") pod \"b3c695f6-b212-4f47-9a88-76996d92772d\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.575260 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-combined-ca-bundle\") pod \"b3c695f6-b212-4f47-9a88-76996d92772d\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.575348 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-public-tls-certs\") pod \"b3c695f6-b212-4f47-9a88-76996d92772d\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.575429 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldhm5\" (UniqueName: \"kubernetes.io/projected/b3c695f6-b212-4f47-9a88-76996d92772d-kube-api-access-ldhm5\") pod \"b3c695f6-b212-4f47-9a88-76996d92772d\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.575505 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3c695f6-b212-4f47-9a88-76996d92772d-logs" (OuterVolumeSpecName: "logs") pod "b3c695f6-b212-4f47-9a88-76996d92772d" (UID: "b3c695f6-b212-4f47-9a88-76996d92772d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.576283 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3c695f6-b212-4f47-9a88-76996d92772d-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.582019 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3c695f6-b212-4f47-9a88-76996d92772d-kube-api-access-ldhm5" (OuterVolumeSpecName: "kube-api-access-ldhm5") pod "b3c695f6-b212-4f47-9a88-76996d92772d" (UID: "b3c695f6-b212-4f47-9a88-76996d92772d"). InnerVolumeSpecName "kube-api-access-ldhm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.595692 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.611277 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3c695f6-b212-4f47-9a88-76996d92772d" (UID: "b3c695f6-b212-4f47-9a88-76996d92772d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.631657 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b3c695f6-b212-4f47-9a88-76996d92772d" (UID: "b3c695f6-b212-4f47-9a88-76996d92772d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.642627 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-config-data" (OuterVolumeSpecName: "config-data") pod "b3c695f6-b212-4f47-9a88-76996d92772d" (UID: "b3c695f6-b212-4f47-9a88-76996d92772d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.669568 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.683029 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.690048 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.690082 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldhm5\" (UniqueName: \"kubernetes.io/projected/b3c695f6-b212-4f47-9a88-76996d92772d-kube-api-access-ldhm5\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.690094 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.690103 4751 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.698155 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.705763 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-listener" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.705792 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-listener" Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.705831 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" containerName="nova-api-api" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.705838 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" containerName="nova-api-api" Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.705858 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" containerName="nova-scheduler-scheduler" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.705864 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" containerName="nova-scheduler-scheduler" Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.735408 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-api" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.735600 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-api" Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.735687 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-notifier" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.735754 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-notifier" Jan 30 21:41:17 
crc kubenswrapper[4751]: E0130 21:41:17.735860 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" containerName="nova-api-log" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.735915 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" containerName="nova-api-log" Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.735970 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-evaluator" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.736017 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-evaluator" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.736875 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-listener" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.736980 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" containerName="nova-api-log" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.737050 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-notifier" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.737123 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-api" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.737187 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" containerName="nova-api-api" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.737246 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-evaluator" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.737312 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" containerName="nova-scheduler-scheduler" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.788649 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.791825 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-combined-ca-bundle\") pod \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.793719 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cgnk\" (UniqueName: \"kubernetes.io/projected/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-kube-api-access-6cgnk\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.793783 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-public-tls-certs\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.793840 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.793902 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-scripts\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.793965 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-internal-tls-certs\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.793978 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.794011 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-config-data\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.794087 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.794274 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.794487 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-k9tjh" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.794562 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.805611 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" (UID: "27bb36a2-bfe5-4dca-a828-ea50cd77e9f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.814927 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.822180 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b3c695f6-b212-4f47-9a88-76996d92772d" (UID: "b3c695f6-b212-4f47-9a88-76996d92772d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.898644 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-internal-tls-certs\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.898714 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-config-data\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.898788 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cgnk\" (UniqueName: \"kubernetes.io/projected/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-kube-api-access-6cgnk\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.898824 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-public-tls-certs\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.898867 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.898916 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-scripts\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.899319 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.902723 4751 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: 
I0130 21:41:17.904170 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.905765 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-public-tls-certs\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.907203 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-scripts\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.908721 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-internal-tls-certs\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.909581 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-config-data\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.933646 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cgnk\" (UniqueName: \"kubernetes.io/projected/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-kube-api-access-6cgnk\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.991559 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" path="/var/lib/kubelet/pods/80a202f4-615a-4f93-86ef-46b6a994dd48/volumes" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.994763 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" path="/var/lib/kubelet/pods/df87edd8-7be6-4739-b927-7fd4415a1945/volumes" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.997601 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.032712 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.052675 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.076079 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.077977 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.080314 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.084513 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.209667 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977b9205-4c23-4ff0-9193-5938e4b87c64-config-data\") pod \"nova-scheduler-0\" (UID: \"977b9205-4c23-4ff0-9193-5938e4b87c64\") " pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.209998 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlfc5\" (UniqueName: \"kubernetes.io/projected/977b9205-4c23-4ff0-9193-5938e4b87c64-kube-api-access-qlfc5\") pod \"nova-scheduler-0\" (UID: \"977b9205-4c23-4ff0-9193-5938e4b87c64\") " pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.210481 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977b9205-4c23-4ff0-9193-5938e4b87c64-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"977b9205-4c23-4ff0-9193-5938e4b87c64\") " pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.312669 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977b9205-4c23-4ff0-9193-5938e4b87c64-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"977b9205-4c23-4ff0-9193-5938e4b87c64\") " pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.312765 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977b9205-4c23-4ff0-9193-5938e4b87c64-config-data\") pod \"nova-scheduler-0\" (UID: \"977b9205-4c23-4ff0-9193-5938e4b87c64\") " pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.312810 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlfc5\" (UniqueName: \"kubernetes.io/projected/977b9205-4c23-4ff0-9193-5938e4b87c64-kube-api-access-qlfc5\") pod \"nova-scheduler-0\" (UID: \"977b9205-4c23-4ff0-9193-5938e4b87c64\") " pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.317068 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977b9205-4c23-4ff0-9193-5938e4b87c64-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"977b9205-4c23-4ff0-9193-5938e4b87c64\") " pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.318835 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977b9205-4c23-4ff0-9193-5938e4b87c64-config-data\") pod \"nova-scheduler-0\" (UID: \"977b9205-4c23-4ff0-9193-5938e4b87c64\") " pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.353492 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"179951f5-39be-43d7-a2fa-3c6f04555760","Type":"ContainerStarted","Data":"3e95ad146a080153432985e867e8880f69ffc1b6ecc59514cf68490b610fd23b"} Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.353542 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"179951f5-39be-43d7-a2fa-3c6f04555760","Type":"ContainerStarted","Data":"995d4040c7bb283e459a1de6d5d00a384ae11a3aeb69f518aafdb3f6073f40ea"} Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.365126 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.365261 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3c695f6-b212-4f47-9a88-76996d92772d","Type":"ContainerDied","Data":"b0dd113a348b24b15d13958c7f00387904d846fc6379d3d8b34c108e756e0709"} Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.365316 4751 scope.go:117] "RemoveContainer" containerID="493a2c726ee0c7312a91490a0bea812358c26401fdc9a767242108a7737a1808" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.367779 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlfc5\" (UniqueName: \"kubernetes.io/projected/977b9205-4c23-4ff0-9193-5938e4b87c64-kube-api-access-qlfc5\") pod \"nova-scheduler-0\" (UID: \"977b9205-4c23-4ff0-9193-5938e4b87c64\") " pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.398259 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.402294 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.417663 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.426863 4751 scope.go:117] "RemoveContainer" containerID="e4635b2c42789dec615eb35af87abe175cead9e8bde37a6b842b0a483841edba" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.442119 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.444111 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.445824 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.447986 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.448947 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.464577 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:41:18 crc kubenswrapper[4751]: W0130 21:41:18.514429 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c6b6e10_77a2_49e7_a4eb_25af482bfab8.slice/crio-7283a35e10e58cb6fd870643eadfa452a5d15d1f89aa7955246ef678a98a324c WatchSource:0}: Error finding container 7283a35e10e58cb6fd870643eadfa452a5d15d1f89aa7955246ef678a98a324c: Status 404 returned error can't find the container with id 7283a35e10e58cb6fd870643eadfa452a5d15d1f89aa7955246ef678a98a324c Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.516042 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.618080 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9j89\" (UniqueName: \"kubernetes.io/projected/e3c7d82f-3209-44cf-a463-9affaab3de75-kube-api-access-t9j89\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.618136 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-config-data\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.618207 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.618235 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3c7d82f-3209-44cf-a463-9affaab3de75-logs\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.618297 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.618326 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.719735 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-config-data\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.720060 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.720091 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3c7d82f-3209-44cf-a463-9affaab3de75-logs\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.720154 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.720182 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-public-tls-certs\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.720250 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9j89\" (UniqueName: \"kubernetes.io/projected/e3c7d82f-3209-44cf-a463-9affaab3de75-kube-api-access-t9j89\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.722084 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3c7d82f-3209-44cf-a463-9affaab3de75-logs\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.726141 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.728271 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-public-tls-certs\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.730424 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-config-data\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 
21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.733863 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.737221 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9j89\" (UniqueName: \"kubernetes.io/projected/e3c7d82f-3209-44cf-a463-9affaab3de75-kube-api-access-t9j89\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.774211 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.974171 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.273045 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.286282 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.385542 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e3c7d82f-3209-44cf-a463-9affaab3de75","Type":"ContainerStarted","Data":"e5be611bbcb98e16ae767655b91785e96757242b89cc7b49991fb5b3b7ff221e"} Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.387203 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c6b6e10-77a2-49e7-a4eb-25af482bfab8","Type":"ContainerStarted","Data":"c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9"} Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.387234 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c6b6e10-77a2-49e7-a4eb-25af482bfab8","Type":"ContainerStarted","Data":"7283a35e10e58cb6fd870643eadfa452a5d15d1f89aa7955246ef678a98a324c"} Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.388564 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"977b9205-4c23-4ff0-9193-5938e4b87c64","Type":"ContainerStarted","Data":"592f899a44405cf48606e6fffb3f2c880c353fdad9b14bfb441a0255d24d3ec4"} Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.388590 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"977b9205-4c23-4ff0-9193-5938e4b87c64","Type":"ContainerStarted","Data":"d973385925aaf9eb882a3e1b1f56491729d38a0fbc6fa8e07757410b1d762f10"} Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.396217 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"179951f5-39be-43d7-a2fa-3c6f04555760","Type":"ContainerStarted","Data":"694e8e9d8c7eaed6f2fddf06358e6f5b4230b1c32f08acef6003ccbb683541bc"} Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.405785 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.405764865 podStartE2EDuration="1.405764865s" podCreationTimestamp="2026-01-30 21:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 
21:41:19.402721943 +0000 UTC m=+1618.148544592" watchObservedRunningTime="2026-01-30 21:41:19.405764865 +0000 UTC m=+1618.151587504" Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.422122 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.422102702 podStartE2EDuration="3.422102702s" podCreationTimestamp="2026-01-30 21:41:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:19.416799931 +0000 UTC m=+1618.162622590" watchObservedRunningTime="2026-01-30 21:41:19.422102702 +0000 UTC m=+1618.167925371" Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.990668 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" path="/var/lib/kubelet/pods/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3/volumes" Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.991461 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" path="/var/lib/kubelet/pods/b3c695f6-b212-4f47-9a88-76996d92772d/volumes" Jan 30 21:41:20 crc kubenswrapper[4751]: I0130 21:41:20.407722 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e3c7d82f-3209-44cf-a463-9affaab3de75","Type":"ContainerStarted","Data":"f457a67532d4f0db550f8ecb8534d2f14439921dfe2fa3ad9b5dce020449c82e"} Jan 30 21:41:20 crc kubenswrapper[4751]: I0130 21:41:20.408032 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e3c7d82f-3209-44cf-a463-9affaab3de75","Type":"ContainerStarted","Data":"2841c17d7bd5007c17762f8b1fb7f7b2e1a03dc730a5a02ed26681c917d526a5"} Jan 30 21:41:20 crc kubenswrapper[4751]: I0130 21:41:20.412837 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c6b6e10-77a2-49e7-a4eb-25af482bfab8","Type":"ContainerStarted","Data":"a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd"} Jan 30 21:41:20 crc kubenswrapper[4751]: I0130 21:41:20.436961 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.436941693 podStartE2EDuration="2.436941693s" podCreationTimestamp="2026-01-30 21:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:20.427027477 +0000 UTC m=+1619.172850126" watchObservedRunningTime="2026-01-30 21:41:20.436941693 +0000 UTC m=+1619.182764342" Jan 30 21:41:21 crc kubenswrapper[4751]: I0130 21:41:21.423099 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c6b6e10-77a2-49e7-a4eb-25af482bfab8","Type":"ContainerStarted","Data":"da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab"} Jan 30 21:41:22 crc kubenswrapper[4751]: I0130 21:41:22.014672 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 21:41:22 crc kubenswrapper[4751]: I0130 21:41:22.014929 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 21:41:22 crc kubenswrapper[4751]: I0130 21:41:22.434580 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c6b6e10-77a2-49e7-a4eb-25af482bfab8","Type":"ContainerStarted","Data":"41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187"} Jan 30 
21:41:22 crc kubenswrapper[4751]: I0130 21:41:22.471695 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.738581699 podStartE2EDuration="5.471674709s" podCreationTimestamp="2026-01-30 21:41:17 +0000 UTC" firstStartedPulling="2026-01-30 21:41:18.517580407 +0000 UTC m=+1617.263403056" lastFinishedPulling="2026-01-30 21:41:21.250673407 +0000 UTC m=+1619.996496066" observedRunningTime="2026-01-30 21:41:22.45529594 +0000 UTC m=+1621.201118589" watchObservedRunningTime="2026-01-30 21:41:22.471674709 +0000 UTC m=+1621.217497358" Jan 30 21:41:23 crc kubenswrapper[4751]: I0130 21:41:23.403253 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 21:41:27 crc kubenswrapper[4751]: I0130 21:41:27.015382 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 21:41:27 crc kubenswrapper[4751]: I0130 21:41:27.016419 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 21:41:28 crc kubenswrapper[4751]: I0130 21:41:28.031629 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="179951f5-39be-43d7-a2fa-3c6f04555760" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.11:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 21:41:28 crc kubenswrapper[4751]: I0130 21:41:28.031612 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="179951f5-39be-43d7-a2fa-3c6f04555760" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.11:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 21:41:28 crc kubenswrapper[4751]: I0130 21:41:28.403734 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 21:41:28 crc kubenswrapper[4751]: I0130 21:41:28.446469 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 21:41:28 crc kubenswrapper[4751]: I0130 21:41:28.569413 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 21:41:28 crc kubenswrapper[4751]: I0130 21:41:28.774962 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 21:41:28 crc kubenswrapper[4751]: I0130 21:41:28.775655 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 21:41:29 crc kubenswrapper[4751]: I0130 21:41:29.787546 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e3c7d82f-3209-44cf-a463-9affaab3de75" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.14:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 21:41:29 crc kubenswrapper[4751]: I0130 21:41:29.787567 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e3c7d82f-3209-44cf-a463-9affaab3de75" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.14:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 21:41:30 crc kubenswrapper[4751]: I0130 21:41:30.976502 4751 scope.go:117] "RemoveContainer" 
containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:41:30 crc kubenswrapper[4751]: E0130 21:41:30.977060 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:41:33 crc kubenswrapper[4751]: I0130 21:41:33.486690 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 21:41:37 crc kubenswrapper[4751]: I0130 21:41:37.019892 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 21:41:37 crc kubenswrapper[4751]: I0130 21:41:37.025723 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 21:41:37 crc kubenswrapper[4751]: I0130 21:41:37.029353 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 21:41:37 crc kubenswrapper[4751]: I0130 21:41:37.644126 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.320319 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.320639 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="67d207d6-2cd8-4679-919b-dedddeebd28d" containerName="kube-state-metrics" containerID="cri-o://c8daadd27b9052e4c383910cfe816522e7df6c5dba304b05d5d1d591c21b393e" gracePeriod=30 Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.434269 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.434699 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="0ea4b0a2-4b62-47b1-b925-f78af9c42125" containerName="mysqld-exporter" containerID="cri-o://20dafdd7e367671986fd5ba74e2e896a728dd40248873c547c64ef7943928472" gracePeriod=30 Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.689433 4751 generic.go:334] "Generic (PLEG): container finished" podID="67d207d6-2cd8-4679-919b-dedddeebd28d" containerID="c8daadd27b9052e4c383910cfe816522e7df6c5dba304b05d5d1d591c21b393e" exitCode=2 Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.689516 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"67d207d6-2cd8-4679-919b-dedddeebd28d","Type":"ContainerDied","Data":"c8daadd27b9052e4c383910cfe816522e7df6c5dba304b05d5d1d591c21b393e"} Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.694870 4751 generic.go:334] "Generic (PLEG): container finished" podID="0ea4b0a2-4b62-47b1-b925-f78af9c42125" containerID="20dafdd7e367671986fd5ba74e2e896a728dd40248873c547c64ef7943928472" exitCode=2 Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.694981 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"0ea4b0a2-4b62-47b1-b925-f78af9c42125","Type":"ContainerDied","Data":"20dafdd7e367671986fd5ba74e2e896a728dd40248873c547c64ef7943928472"} Jan 
30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.788847 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.789850 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.802635 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.804616 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.110777 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.115465 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.238874 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5zjj\" (UniqueName: \"kubernetes.io/projected/67d207d6-2cd8-4679-919b-dedddeebd28d-kube-api-access-g5zjj\") pod \"67d207d6-2cd8-4679-919b-dedddeebd28d\" (UID: \"67d207d6-2cd8-4679-919b-dedddeebd28d\") " Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.238969 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-combined-ca-bundle\") pod \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.239107 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwtvg\" (UniqueName: \"kubernetes.io/projected/0ea4b0a2-4b62-47b1-b925-f78af9c42125-kube-api-access-lwtvg\") pod \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.239252 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-config-data\") pod \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.246397 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67d207d6-2cd8-4679-919b-dedddeebd28d-kube-api-access-g5zjj" (OuterVolumeSpecName: "kube-api-access-g5zjj") pod "67d207d6-2cd8-4679-919b-dedddeebd28d" (UID: "67d207d6-2cd8-4679-919b-dedddeebd28d"). InnerVolumeSpecName "kube-api-access-g5zjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.248914 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ea4b0a2-4b62-47b1-b925-f78af9c42125-kube-api-access-lwtvg" (OuterVolumeSpecName: "kube-api-access-lwtvg") pod "0ea4b0a2-4b62-47b1-b925-f78af9c42125" (UID: "0ea4b0a2-4b62-47b1-b925-f78af9c42125"). InnerVolumeSpecName "kube-api-access-lwtvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.280419 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ea4b0a2-4b62-47b1-b925-f78af9c42125" (UID: "0ea4b0a2-4b62-47b1-b925-f78af9c42125"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.306488 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-config-data" (OuterVolumeSpecName: "config-data") pod "0ea4b0a2-4b62-47b1-b925-f78af9c42125" (UID: "0ea4b0a2-4b62-47b1-b925-f78af9c42125"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.342656 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwtvg\" (UniqueName: \"kubernetes.io/projected/0ea4b0a2-4b62-47b1-b925-f78af9c42125-kube-api-access-lwtvg\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.342690 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.342700 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5zjj\" (UniqueName: \"kubernetes.io/projected/67d207d6-2cd8-4679-919b-dedddeebd28d-kube-api-access-g5zjj\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.342708 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.709026 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"0ea4b0a2-4b62-47b1-b925-f78af9c42125","Type":"ContainerDied","Data":"de8e457ae0f4068038c3e6dd30bdd6296bb65bd86e565ea48c1e280f1358b506"} Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.709377 4751 scope.go:117] "RemoveContainer" containerID="20dafdd7e367671986fd5ba74e2e896a728dd40248873c547c64ef7943928472" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.709544 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.714994 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"67d207d6-2cd8-4679-919b-dedddeebd28d","Type":"ContainerDied","Data":"3bb2d0d293bcca63ced4a6eec87e280101ac65a5555311aa13f1e064ca31af8e"} Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.715631 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.716106 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.790991 4751 scope.go:117] "RemoveContainer" containerID="c8daadd27b9052e4c383910cfe816522e7df6c5dba304b05d5d1d591c21b393e" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.821408 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.843534 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.876021 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.903288 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.924808 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 21:41:39 crc kubenswrapper[4751]: E0130 21:41:39.925357 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ea4b0a2-4b62-47b1-b925-f78af9c42125" containerName="mysqld-exporter" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.925378 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ea4b0a2-4b62-47b1-b925-f78af9c42125" containerName="mysqld-exporter" Jan 30 21:41:39 crc kubenswrapper[4751]: E0130 21:41:39.925394 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67d207d6-2cd8-4679-919b-dedddeebd28d" containerName="kube-state-metrics" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.925400 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="67d207d6-2cd8-4679-919b-dedddeebd28d" containerName="kube-state-metrics" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.925639 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ea4b0a2-4b62-47b1-b925-f78af9c42125" containerName="mysqld-exporter" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.925657 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="67d207d6-2cd8-4679-919b-dedddeebd28d" containerName="kube-state-metrics" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.926448 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.929109 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.929339 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.929879 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.953354 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.967995 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.970130 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.979918 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.980224 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.002915 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ea4b0a2-4b62-47b1-b925-f78af9c42125" path="/var/lib/kubelet/pods/0ea4b0a2-4b62-47b1-b925-f78af9c42125/volumes" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.003632 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67d207d6-2cd8-4679-919b-dedddeebd28d" path="/var/lib/kubelet/pods/67d207d6-2cd8-4679-919b-dedddeebd28d/volumes" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.004384 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.059189 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0cb07e-2f77-49e2-931f-c896c3962f9d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.059269 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7f85043-bc84-41e2-9f14-a08f96da06f2-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.059558 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f85043-bc84-41e2-9f14-a08f96da06f2-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.059741 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbg69\" (UniqueName: \"kubernetes.io/projected/e7f85043-bc84-41e2-9f14-a08f96da06f2-kube-api-access-xbg69\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.059854 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f85043-bc84-41e2-9f14-a08f96da06f2-config-data\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.060302 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9b7g\" (UniqueName: \"kubernetes.io/projected/dc0cb07e-2f77-49e2-931f-c896c3962f9d-kube-api-access-m9b7g\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.060713 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc0cb07e-2f77-49e2-931f-c896c3962f9d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.060866 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/dc0cb07e-2f77-49e2-931f-c896c3962f9d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.162999 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f85043-bc84-41e2-9f14-a08f96da06f2-config-data\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.163499 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9b7g\" (UniqueName: \"kubernetes.io/projected/dc0cb07e-2f77-49e2-931f-c896c3962f9d-kube-api-access-m9b7g\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.163602 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc0cb07e-2f77-49e2-931f-c896c3962f9d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.163627 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/dc0cb07e-2f77-49e2-931f-c896c3962f9d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.163693 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0cb07e-2f77-49e2-931f-c896c3962f9d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.163751 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7f85043-bc84-41e2-9f14-a08f96da06f2-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.163838 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f85043-bc84-41e2-9f14-a08f96da06f2-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.163881 4751 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-xbg69\" (UniqueName: \"kubernetes.io/projected/e7f85043-bc84-41e2-9f14-a08f96da06f2-kube-api-access-xbg69\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.170316 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/dc0cb07e-2f77-49e2-931f-c896c3962f9d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.171017 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7f85043-bc84-41e2-9f14-a08f96da06f2-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.171648 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f85043-bc84-41e2-9f14-a08f96da06f2-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.172310 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc0cb07e-2f77-49e2-931f-c896c3962f9d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.176368 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0cb07e-2f77-49e2-931f-c896c3962f9d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.177000 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f85043-bc84-41e2-9f14-a08f96da06f2-config-data\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.180769 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbg69\" (UniqueName: \"kubernetes.io/projected/e7f85043-bc84-41e2-9f14-a08f96da06f2-kube-api-access-xbg69\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.184672 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9b7g\" (UniqueName: \"kubernetes.io/projected/dc0cb07e-2f77-49e2-931f-c896c3962f9d-kube-api-access-m9b7g\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.254081 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.298099 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.749294 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 21:41:40 crc kubenswrapper[4751]: W0130 21:41:40.865549 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7f85043_bc84_41e2_9f14_a08f96da06f2.slice/crio-759a82268f0d9dfc45ff3777bbe4d84bb6156c47d4c33e30f6d0c095929fdc3b WatchSource:0}: Error finding container 759a82268f0d9dfc45ff3777bbe4d84bb6156c47d4c33e30f6d0c095929fdc3b: Status 404 returned error can't find the container with id 759a82268f0d9dfc45ff3777bbe4d84bb6156c47d4c33e30f6d0c095929fdc3b Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.872711 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.048493 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.048799 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="ceilometer-central-agent" containerID="cri-o://9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d" gracePeriod=30 Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.048862 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="ceilometer-notification-agent" containerID="cri-o://7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0" gracePeriod=30 Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.048884 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="sg-core" containerID="cri-o://42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4" gracePeriod=30 Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.048821 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="proxy-httpd" containerID="cri-o://75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c" gracePeriod=30 Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.740154 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dc0cb07e-2f77-49e2-931f-c896c3962f9d","Type":"ContainerStarted","Data":"730d589bfd7e40310877bbd01906d1e2f9684c870a2bcc74a9c01fec72ae8cc7"} Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.740656 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.740676 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dc0cb07e-2f77-49e2-931f-c896c3962f9d","Type":"ContainerStarted","Data":"cea33b0f971d5c3b96f9278f0a8f5b377432392741280de21b015aa6529c8508"} Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.743249 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"e7f85043-bc84-41e2-9f14-a08f96da06f2","Type":"ContainerStarted","Data":"e74df03b6505107d773a0997b31506d9bd629597296e805158507eccbfde63b1"} Jan 30 
21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.743290 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"e7f85043-bc84-41e2-9f14-a08f96da06f2","Type":"ContainerStarted","Data":"759a82268f0d9dfc45ff3777bbe4d84bb6156c47d4c33e30f6d0c095929fdc3b"} Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.745988 4751 generic.go:334] "Generic (PLEG): container finished" podID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerID="75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c" exitCode=0 Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.746019 4751 generic.go:334] "Generic (PLEG): container finished" podID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerID="42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4" exitCode=2 Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.746028 4751 generic.go:334] "Generic (PLEG): container finished" podID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerID="9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d" exitCode=0 Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.747412 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4","Type":"ContainerDied","Data":"75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c"} Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.747440 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4","Type":"ContainerDied","Data":"42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4"} Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.747450 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4","Type":"ContainerDied","Data":"9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d"} Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.778070 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.411605013 podStartE2EDuration="2.778047917s" podCreationTimestamp="2026-01-30 21:41:39 +0000 UTC" firstStartedPulling="2026-01-30 21:41:40.75101702 +0000 UTC m=+1639.496839669" lastFinishedPulling="2026-01-30 21:41:41.117459914 +0000 UTC m=+1639.863282573" observedRunningTime="2026-01-30 21:41:41.768604713 +0000 UTC m=+1640.514427372" watchObservedRunningTime="2026-01-30 21:41:41.778047917 +0000 UTC m=+1640.523870566" Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.800998 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.257528416 podStartE2EDuration="2.800972891s" podCreationTimestamp="2026-01-30 21:41:39 +0000 UTC" firstStartedPulling="2026-01-30 21:41:40.870320085 +0000 UTC m=+1639.616142744" lastFinishedPulling="2026-01-30 21:41:41.41376457 +0000 UTC m=+1640.159587219" observedRunningTime="2026-01-30 21:41:41.781983942 +0000 UTC m=+1640.527806601" watchObservedRunningTime="2026-01-30 21:41:41.800972891 +0000 UTC m=+1640.546795540" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.596942 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.678881 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-config-data\") pod \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.678930 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-combined-ca-bundle\") pod \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.679041 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-log-httpd\") pod \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.679086 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-scripts\") pod \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.679141 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrzcv\" (UniqueName: \"kubernetes.io/projected/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-kube-api-access-jrzcv\") pod \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.679181 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-run-httpd\") pod \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.679200 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-sg-core-conf-yaml\") pod \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.681276 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" (UID: "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.681981 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" (UID: "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.688000 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-scripts" (OuterVolumeSpecName: "scripts") pod "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" (UID: "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.691829 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-kube-api-access-jrzcv" (OuterVolumeSpecName: "kube-api-access-jrzcv") pod "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" (UID: "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4"). InnerVolumeSpecName "kube-api-access-jrzcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.722802 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" (UID: "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.773568 4751 generic.go:334] "Generic (PLEG): container finished" podID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerID="7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0" exitCode=0 Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.773614 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4","Type":"ContainerDied","Data":"7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0"} Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.773644 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4","Type":"ContainerDied","Data":"b29538a0f0729315334798054b08c861d4a2892710b8f4d3c3dc99157c776bfc"} Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.773665 4751 scope.go:117] "RemoveContainer" containerID="75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.773681 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.779409 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" (UID: "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.782349 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.782388 4751 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.782403 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.782417 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrzcv\" (UniqueName: \"kubernetes.io/projected/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-kube-api-access-jrzcv\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.782430 4751 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.782440 4751 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.836850 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-config-data" (OuterVolumeSpecName: "config-data") pod "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" (UID: "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.842200 4751 scope.go:117] "RemoveContainer" containerID="42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.871206 4751 scope.go:117] "RemoveContainer" containerID="7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.884657 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.891396 4751 scope.go:117] "RemoveContainer" containerID="9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.919993 4751 scope.go:117] "RemoveContainer" containerID="75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c" Jan 30 21:41:43 crc kubenswrapper[4751]: E0130 21:41:43.920544 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c\": container with ID starting with 75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c not found: ID does not exist" containerID="75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.920583 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c"} err="failed to get container status \"75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c\": rpc error: code = NotFound desc = could not find container \"75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c\": container with ID starting with 75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c not found: ID does not exist" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.920607 4751 scope.go:117] "RemoveContainer" containerID="42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4" Jan 30 21:41:43 crc kubenswrapper[4751]: E0130 21:41:43.920882 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4\": container with ID starting with 42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4 not found: ID does not exist" containerID="42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.920908 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4"} err="failed to get container status \"42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4\": rpc error: code = NotFound desc = could not find container \"42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4\": container with ID starting with 42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4 not found: ID does not exist" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.920926 4751 scope.go:117] "RemoveContainer" containerID="7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0" Jan 30 21:41:43 crc kubenswrapper[4751]: E0130 
21:41:43.921211 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0\": container with ID starting with 7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0 not found: ID does not exist" containerID="7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.921236 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0"} err="failed to get container status \"7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0\": rpc error: code = NotFound desc = could not find container \"7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0\": container with ID starting with 7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0 not found: ID does not exist" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.921251 4751 scope.go:117] "RemoveContainer" containerID="9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d" Jan 30 21:41:43 crc kubenswrapper[4751]: E0130 21:41:43.921600 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d\": container with ID starting with 9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d not found: ID does not exist" containerID="9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.921623 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d"} err="failed to get container status \"9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d\": rpc error: code = NotFound desc = could not find container \"9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d\": container with ID starting with 9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d not found: ID does not exist" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.975521 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:41:43 crc kubenswrapper[4751]: E0130 21:41:43.975780 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.166664 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.184493 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.201414 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:41:44 crc kubenswrapper[4751]: E0130 21:41:44.202023 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" 
containerName="sg-core" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.202056 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="sg-core" Jan 30 21:41:44 crc kubenswrapper[4751]: E0130 21:41:44.202091 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="ceilometer-notification-agent" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.202099 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="ceilometer-notification-agent" Jan 30 21:41:44 crc kubenswrapper[4751]: E0130 21:41:44.202130 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="ceilometer-central-agent" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.202138 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="ceilometer-central-agent" Jan 30 21:41:44 crc kubenswrapper[4751]: E0130 21:41:44.202162 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="proxy-httpd" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.202169 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="proxy-httpd" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.202445 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="ceilometer-central-agent" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.202461 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="ceilometer-notification-agent" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.202471 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="proxy-httpd" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.202498 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="sg-core" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.205229 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.211639 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.211956 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.214169 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.217798 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.294740 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.294986 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-scripts\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.295186 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-log-httpd\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.295440 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-run-httpd\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.295535 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85tk4\" (UniqueName: \"kubernetes.io/projected/dc6664ba-7684-43bb-a51d-9e508b308b3d-kube-api-access-85tk4\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.295610 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.295645 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-config-data\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.295711 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.397908 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.397973 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-config-data\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.398009 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.399028 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.399116 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-scripts\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.399188 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-log-httpd\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.399275 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-run-httpd\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.399366 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85tk4\" (UniqueName: \"kubernetes.io/projected/dc6664ba-7684-43bb-a51d-9e508b308b3d-kube-api-access-85tk4\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.399802 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-log-httpd\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.400056 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-run-httpd\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.404065 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-config-data\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.404227 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.405007 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-scripts\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.405587 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.411808 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.417640 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85tk4\" (UniqueName: \"kubernetes.io/projected/dc6664ba-7684-43bb-a51d-9e508b308b3d-kube-api-access-85tk4\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.556411 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:41:45 crc kubenswrapper[4751]: W0130 21:41:45.067565 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc6664ba_7684_43bb_a51d_9e508b308b3d.slice/crio-f0a52aaf12d629a20b40a1b124115c7c8878ba6a296c7b2005aac82e96f89bf6 WatchSource:0}: Error finding container f0a52aaf12d629a20b40a1b124115c7c8878ba6a296c7b2005aac82e96f89bf6: Status 404 returned error can't find the container with id f0a52aaf12d629a20b40a1b124115c7c8878ba6a296c7b2005aac82e96f89bf6 Jan 30 21:41:45 crc kubenswrapper[4751]: I0130 21:41:45.079536 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:41:45 crc kubenswrapper[4751]: I0130 21:41:45.803159 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc6664ba-7684-43bb-a51d-9e508b308b3d","Type":"ContainerStarted","Data":"ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1"} Jan 30 21:41:45 crc kubenswrapper[4751]: I0130 21:41:45.803425 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc6664ba-7684-43bb-a51d-9e508b308b3d","Type":"ContainerStarted","Data":"f0a52aaf12d629a20b40a1b124115c7c8878ba6a296c7b2005aac82e96f89bf6"} Jan 30 21:41:45 crc kubenswrapper[4751]: I0130 21:41:45.994051 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" path="/var/lib/kubelet/pods/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4/volumes" Jan 30 21:41:46 crc kubenswrapper[4751]: I0130 21:41:46.818311 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc6664ba-7684-43bb-a51d-9e508b308b3d","Type":"ContainerStarted","Data":"a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875"} Jan 30 21:41:47 crc kubenswrapper[4751]: I0130 21:41:47.833259 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc6664ba-7684-43bb-a51d-9e508b308b3d","Type":"ContainerStarted","Data":"0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534"} Jan 30 21:41:49 crc kubenswrapper[4751]: I0130 21:41:49.854118 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc6664ba-7684-43bb-a51d-9e508b308b3d","Type":"ContainerStarted","Data":"4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725"} Jan 30 21:41:49 crc kubenswrapper[4751]: I0130 21:41:49.855176 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 21:41:49 crc kubenswrapper[4751]: I0130 21:41:49.884415 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.592528319 podStartE2EDuration="5.884393058s" podCreationTimestamp="2026-01-30 21:41:44 +0000 UTC" firstStartedPulling="2026-01-30 21:41:45.070175589 +0000 UTC m=+1643.815998248" lastFinishedPulling="2026-01-30 21:41:49.362040308 +0000 UTC m=+1648.107862987" observedRunningTime="2026-01-30 21:41:49.875188172 +0000 UTC m=+1648.621010821" watchObservedRunningTime="2026-01-30 21:41:49.884393058 +0000 UTC m=+1648.630215727" Jan 30 21:41:50 crc kubenswrapper[4751]: I0130 21:41:50.268066 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 21:41:54 crc kubenswrapper[4751]: I0130 21:41:54.976867 4751 scope.go:117] "RemoveContainer" 
containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:41:54 crc kubenswrapper[4751]: E0130 21:41:54.977825 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:42:09 crc kubenswrapper[4751]: I0130 21:42:09.975758 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:42:09 crc kubenswrapper[4751]: E0130 21:42:09.976614 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:42:14 crc kubenswrapper[4751]: I0130 21:42:14.575566 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 21:42:20 crc kubenswrapper[4751]: I0130 21:42:20.976643 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:42:20 crc kubenswrapper[4751]: E0130 21:42:20.979164 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:42:24 crc kubenswrapper[4751]: I0130 21:42:24.978588 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-npwgd"] Jan 30 21:42:24 crc kubenswrapper[4751]: I0130 21:42:24.989399 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-npwgd"] Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.077014 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-xw5xf"] Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.078546 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.116534 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-xw5xf"] Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.239496 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfv8d\" (UniqueName: \"kubernetes.io/projected/27b928b3-101e-4649-ae57-9857145062f0-kube-api-access-kfv8d\") pod \"heat-db-sync-xw5xf\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.239695 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-config-data\") pod \"heat-db-sync-xw5xf\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.239741 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-combined-ca-bundle\") pod \"heat-db-sync-xw5xf\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.341903 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfv8d\" (UniqueName: \"kubernetes.io/projected/27b928b3-101e-4649-ae57-9857145062f0-kube-api-access-kfv8d\") pod \"heat-db-sync-xw5xf\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.342016 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-config-data\") pod \"heat-db-sync-xw5xf\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.342040 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-combined-ca-bundle\") pod \"heat-db-sync-xw5xf\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.367589 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-combined-ca-bundle\") pod \"heat-db-sync-xw5xf\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.367853 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-config-data\") pod \"heat-db-sync-xw5xf\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.370357 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfv8d\" (UniqueName: \"kubernetes.io/projected/27b928b3-101e-4649-ae57-9857145062f0-kube-api-access-kfv8d\") pod \"heat-db-sync-xw5xf\" (UID: 
\"27b928b3-101e-4649-ae57-9857145062f0\") " pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.407178 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.990751 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1051dd3c-5d30-47f1-8162-3a3e9d5ee271" path="/var/lib/kubelet/pods/1051dd3c-5d30-47f1-8162-3a3e9d5ee271/volumes" Jan 30 21:42:26 crc kubenswrapper[4751]: W0130 21:42:26.009788 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27b928b3_101e_4649_ae57_9857145062f0.slice/crio-404e00df247a8af350a6ca415370390444a1c5afadb4e6e6be032527b135bac6 WatchSource:0}: Error finding container 404e00df247a8af350a6ca415370390444a1c5afadb4e6e6be032527b135bac6: Status 404 returned error can't find the container with id 404e00df247a8af350a6ca415370390444a1c5afadb4e6e6be032527b135bac6 Jan 30 21:42:26 crc kubenswrapper[4751]: I0130 21:42:26.012304 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-xw5xf"] Jan 30 21:42:26 crc kubenswrapper[4751]: I0130 21:42:26.321502 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xw5xf" event={"ID":"27b928b3-101e-4649-ae57-9857145062f0","Type":"ContainerStarted","Data":"404e00df247a8af350a6ca415370390444a1c5afadb4e6e6be032527b135bac6"} Jan 30 21:42:27 crc kubenswrapper[4751]: I0130 21:42:27.006526 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 30 21:42:27 crc kubenswrapper[4751]: I0130 21:42:27.608920 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:42:27 crc kubenswrapper[4751]: I0130 21:42:27.609548 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="ceilometer-central-agent" containerID="cri-o://ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1" gracePeriod=30 Jan 30 21:42:27 crc kubenswrapper[4751]: I0130 21:42:27.609705 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="ceilometer-notification-agent" containerID="cri-o://a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875" gracePeriod=30 Jan 30 21:42:27 crc kubenswrapper[4751]: I0130 21:42:27.609706 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="sg-core" containerID="cri-o://0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534" gracePeriod=30 Jan 30 21:42:27 crc kubenswrapper[4751]: I0130 21:42:27.609820 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="proxy-httpd" containerID="cri-o://4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725" gracePeriod=30 Jan 30 21:42:28 crc kubenswrapper[4751]: I0130 21:42:28.129297 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 21:42:28 crc kubenswrapper[4751]: I0130 21:42:28.351213 4751 generic.go:334] "Generic (PLEG): container finished" podID="dc6664ba-7684-43bb-a51d-9e508b308b3d" 
containerID="4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725" exitCode=0 Jan 30 21:42:28 crc kubenswrapper[4751]: I0130 21:42:28.351753 4751 generic.go:334] "Generic (PLEG): container finished" podID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerID="0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534" exitCode=2 Jan 30 21:42:28 crc kubenswrapper[4751]: I0130 21:42:28.351768 4751 generic.go:334] "Generic (PLEG): container finished" podID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerID="ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1" exitCode=0 Jan 30 21:42:28 crc kubenswrapper[4751]: I0130 21:42:28.351266 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc6664ba-7684-43bb-a51d-9e508b308b3d","Type":"ContainerDied","Data":"4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725"} Jan 30 21:42:28 crc kubenswrapper[4751]: I0130 21:42:28.351807 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc6664ba-7684-43bb-a51d-9e508b308b3d","Type":"ContainerDied","Data":"0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534"} Jan 30 21:42:28 crc kubenswrapper[4751]: I0130 21:42:28.351825 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc6664ba-7684-43bb-a51d-9e508b308b3d","Type":"ContainerDied","Data":"ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1"} Jan 30 21:42:29 crc kubenswrapper[4751]: I0130 21:42:29.978437 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.120045 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-combined-ca-bundle\") pod \"dc6664ba-7684-43bb-a51d-9e508b308b3d\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.120406 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-ceilometer-tls-certs\") pod \"dc6664ba-7684-43bb-a51d-9e508b308b3d\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.120513 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-config-data\") pod \"dc6664ba-7684-43bb-a51d-9e508b308b3d\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.120550 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-log-httpd\") pod \"dc6664ba-7684-43bb-a51d-9e508b308b3d\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.120569 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85tk4\" (UniqueName: \"kubernetes.io/projected/dc6664ba-7684-43bb-a51d-9e508b308b3d-kube-api-access-85tk4\") pod \"dc6664ba-7684-43bb-a51d-9e508b308b3d\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.120648 4751 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-run-httpd\") pod \"dc6664ba-7684-43bb-a51d-9e508b308b3d\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.120831 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-sg-core-conf-yaml\") pod \"dc6664ba-7684-43bb-a51d-9e508b308b3d\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.120867 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-scripts\") pod \"dc6664ba-7684-43bb-a51d-9e508b308b3d\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.124770 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dc6664ba-7684-43bb-a51d-9e508b308b3d" (UID: "dc6664ba-7684-43bb-a51d-9e508b308b3d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.126644 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dc6664ba-7684-43bb-a51d-9e508b308b3d" (UID: "dc6664ba-7684-43bb-a51d-9e508b308b3d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.130423 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-scripts" (OuterVolumeSpecName: "scripts") pod "dc6664ba-7684-43bb-a51d-9e508b308b3d" (UID: "dc6664ba-7684-43bb-a51d-9e508b308b3d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.158558 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc6664ba-7684-43bb-a51d-9e508b308b3d-kube-api-access-85tk4" (OuterVolumeSpecName: "kube-api-access-85tk4") pod "dc6664ba-7684-43bb-a51d-9e508b308b3d" (UID: "dc6664ba-7684-43bb-a51d-9e508b308b3d"). InnerVolumeSpecName "kube-api-access-85tk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.212651 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dc6664ba-7684-43bb-a51d-9e508b308b3d" (UID: "dc6664ba-7684-43bb-a51d-9e508b308b3d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.224362 4751 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.224402 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.224416 4751 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.224427 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85tk4\" (UniqueName: \"kubernetes.io/projected/dc6664ba-7684-43bb-a51d-9e508b308b3d-kube-api-access-85tk4\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.224441 4751 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.255833 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "dc6664ba-7684-43bb-a51d-9e508b308b3d" (UID: "dc6664ba-7684-43bb-a51d-9e508b308b3d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.295178 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc6664ba-7684-43bb-a51d-9e508b308b3d" (UID: "dc6664ba-7684-43bb-a51d-9e508b308b3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.326654 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.326684 4751 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.369562 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-config-data" (OuterVolumeSpecName: "config-data") pod "dc6664ba-7684-43bb-a51d-9e508b308b3d" (UID: "dc6664ba-7684-43bb-a51d-9e508b308b3d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.380818 4751 generic.go:334] "Generic (PLEG): container finished" podID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerID="a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875" exitCode=0 Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.380882 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc6664ba-7684-43bb-a51d-9e508b308b3d","Type":"ContainerDied","Data":"a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875"} Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.380914 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc6664ba-7684-43bb-a51d-9e508b308b3d","Type":"ContainerDied","Data":"f0a52aaf12d629a20b40a1b124115c7c8878ba6a296c7b2005aac82e96f89bf6"} Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.380931 4751 scope.go:117] "RemoveContainer" containerID="4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.381112 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.425257 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.426468 4751 scope.go:117] "RemoveContainer" containerID="0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.433556 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.481278 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.489165 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:42:30 crc kubenswrapper[4751]: E0130 21:42:30.489707 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="proxy-httpd" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.489725 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="proxy-httpd" Jan 30 21:42:30 crc kubenswrapper[4751]: E0130 21:42:30.489740 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="sg-core" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.489745 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="sg-core" Jan 30 21:42:30 crc kubenswrapper[4751]: E0130 21:42:30.489768 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="ceilometer-notification-agent" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.489775 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="ceilometer-notification-agent" Jan 30 21:42:30 crc kubenswrapper[4751]: E0130 21:42:30.489789 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="ceilometer-central-agent" Jan 30 
21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.489798 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="ceilometer-central-agent" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.490010 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="ceilometer-notification-agent" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.490021 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="ceilometer-central-agent" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.490035 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="sg-core" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.490057 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="proxy-httpd" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.500534 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.504956 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.505035 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.505134 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.524268 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.549319 4751 scope.go:117] "RemoveContainer" containerID="a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.593810 4751 scope.go:117] "RemoveContainer" containerID="ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.635525 4751 scope.go:117] "RemoveContainer" containerID="4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725" Jan 30 21:42:30 crc kubenswrapper[4751]: E0130 21:42:30.635912 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725\": container with ID starting with 4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725 not found: ID does not exist" containerID="4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.635946 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725"} err="failed to get container status \"4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725\": rpc error: code = NotFound desc = could not find container \"4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725\": container with ID starting with 4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725 not found: ID does not exist" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.635970 4751 scope.go:117] "RemoveContainer" 
containerID="0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534" Jan 30 21:42:30 crc kubenswrapper[4751]: E0130 21:42:30.636676 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534\": container with ID starting with 0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534 not found: ID does not exist" containerID="0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.636698 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534"} err="failed to get container status \"0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534\": rpc error: code = NotFound desc = could not find container \"0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534\": container with ID starting with 0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534 not found: ID does not exist" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.636716 4751 scope.go:117] "RemoveContainer" containerID="a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.636895 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-config-data\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.636965 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: E0130 21:42:30.636986 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875\": container with ID starting with a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875 not found: ID does not exist" containerID="a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.637010 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875"} err="failed to get container status \"a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875\": rpc error: code = NotFound desc = could not find container \"a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875\": container with ID starting with a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875 not found: ID does not exist" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.637028 4751 scope.go:117] "RemoveContainer" containerID="ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.636998 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c69dc070-7de6-4681-a44b-6e2007a7f671-log-httpd\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.637100 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-scripts\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.637122 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.637181 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.637250 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8m7w\" (UniqueName: \"kubernetes.io/projected/c69dc070-7de6-4681-a44b-6e2007a7f671-kube-api-access-x8m7w\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.637412 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c69dc070-7de6-4681-a44b-6e2007a7f671-run-httpd\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: E0130 21:42:30.639385 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1\": container with ID starting with ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1 not found: ID does not exist" containerID="ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.639419 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1"} err="failed to get container status \"ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1\": rpc error: code = NotFound desc = could not find container \"ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1\": container with ID starting with ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1 not found: ID does not exist" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.738894 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c69dc070-7de6-4681-a44b-6e2007a7f671-run-httpd\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.739232 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-config-data\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.739298 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.739343 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c69dc070-7de6-4681-a44b-6e2007a7f671-log-httpd\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.739361 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-scripts\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.739376 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.739416 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.739467 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8m7w\" (UniqueName: \"kubernetes.io/projected/c69dc070-7de6-4681-a44b-6e2007a7f671-kube-api-access-x8m7w\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.739480 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c69dc070-7de6-4681-a44b-6e2007a7f671-run-httpd\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.739782 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c69dc070-7de6-4681-a44b-6e2007a7f671-log-httpd\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.746362 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-scripts\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.746380 4751 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.747610 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.747928 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-config-data\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.758992 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.778390 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8m7w\" (UniqueName: \"kubernetes.io/projected/c69dc070-7de6-4681-a44b-6e2007a7f671-kube-api-access-x8m7w\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.830871 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:42:31 crc kubenswrapper[4751]: I0130 21:42:31.511008 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:42:31 crc kubenswrapper[4751]: I0130 21:42:31.984763 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:42:31 crc kubenswrapper[4751]: E0130 21:42:31.985091 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:42:32 crc kubenswrapper[4751]: I0130 21:42:32.018743 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" path="/var/lib/kubelet/pods/dc6664ba-7684-43bb-a51d-9e508b308b3d/volumes" Jan 30 21:42:32 crc kubenswrapper[4751]: I0130 21:42:32.328923 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" containerName="rabbitmq" containerID="cri-o://8694789fa0038f6976a755ccc1f09ff5edec94cba32aab400030d4cae96b540d" gracePeriod=604795 Jan 30 21:42:32 crc kubenswrapper[4751]: I0130 21:42:32.408611 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c69dc070-7de6-4681-a44b-6e2007a7f671","Type":"ContainerStarted","Data":"d210c137bcb42ac00e7e4b52dbd153bb38b8dbaeac853a4d911624e47f822e61"} Jan 30 21:42:33 
crc kubenswrapper[4751]: I0130 21:42:33.110094 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="61d75daf-41cb-4ab5-b849-c98080ca748b" containerName="rabbitmq" containerID="cri-o://654aa5cd180d3480262a0eb6327c9c516fd2aafbea0de4e5b807e47db7d88dd1" gracePeriod=604796 Jan 30 21:42:40 crc kubenswrapper[4751]: I0130 21:42:40.390146 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: connect: connection refused" Jan 30 21:42:40 crc kubenswrapper[4751]: I0130 21:42:40.429053 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="61d75daf-41cb-4ab5-b849-c98080ca748b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.137:5671: connect: connection refused" Jan 30 21:42:40 crc kubenswrapper[4751]: I0130 21:42:40.514736 4751 generic.go:334] "Generic (PLEG): container finished" podID="61d75daf-41cb-4ab5-b849-c98080ca748b" containerID="654aa5cd180d3480262a0eb6327c9c516fd2aafbea0de4e5b807e47db7d88dd1" exitCode=0 Jan 30 21:42:40 crc kubenswrapper[4751]: I0130 21:42:40.514820 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"61d75daf-41cb-4ab5-b849-c98080ca748b","Type":"ContainerDied","Data":"654aa5cd180d3480262a0eb6327c9c516fd2aafbea0de4e5b807e47db7d88dd1"} Jan 30 21:42:40 crc kubenswrapper[4751]: I0130 21:42:40.517142 4751 generic.go:334] "Generic (PLEG): container finished" podID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" containerID="8694789fa0038f6976a755ccc1f09ff5edec94cba32aab400030d4cae96b540d" exitCode=0 Jan 30 21:42:40 crc kubenswrapper[4751]: I0130 21:42:40.517205 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"192a5913-0c28-4214-9ac0-d37ca2eeb34c","Type":"ContainerDied","Data":"8694789fa0038f6976a755ccc1f09ff5edec94cba32aab400030d4cae96b540d"} Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.532856 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"192a5913-0c28-4214-9ac0-d37ca2eeb34c","Type":"ContainerDied","Data":"e8cf5c49c1669ca82eb54f5065d6c41864e69f1712fef33128cb50eb2e139821"} Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.533418 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8cf5c49c1669ca82eb54f5065d6c41864e69f1712fef33128cb50eb2e139821" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.619135 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.719610 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") pod \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.719679 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-confd\") pod \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.719727 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/192a5913-0c28-4214-9ac0-d37ca2eeb34c-erlang-cookie-secret\") pod \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.719778 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-config-data\") pod \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.719815 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-plugins\") pod \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.719952 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-tls\") pod \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.720028 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-erlang-cookie\") pod \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.720088 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlq5r\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-kube-api-access-nlq5r\") pod \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.720109 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/192a5913-0c28-4214-9ac0-d37ca2eeb34c-pod-info\") pod \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.720177 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-server-conf\") pod 
\"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.720226 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-plugins-conf\") pod \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.721737 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.722387 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.725220 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.741018 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.741552 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/192a5913-0c28-4214-9ac0-d37ca2eeb34c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.744599 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/192a5913-0c28-4214-9ac0-d37ca2eeb34c-pod-info" (OuterVolumeSpecName: "pod-info") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.753804 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-kube-api-access-nlq5r" (OuterVolumeSpecName: "kube-api-access-nlq5r") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "kube-api-access-nlq5r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.787241 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-config-data" (OuterVolumeSpecName: "config-data") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.798980 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832" (OuterVolumeSpecName: "persistence") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "pvc-e3070f15-4c27-478d-9eeb-e56a8b304832". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.825719 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.825751 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlq5r\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-kube-api-access-nlq5r\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.825762 4751 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/192a5913-0c28-4214-9ac0-d37ca2eeb34c-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.825772 4751 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.825804 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") on node \"crc\" " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.825816 4751 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/192a5913-0c28-4214-9ac0-d37ca2eeb34c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.825824 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.825832 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.825870 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.826238 4751 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-server-conf" (OuterVolumeSpecName: "server-conf") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.887805 4751 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.887966 4751 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e3070f15-4c27-478d-9eeb-e56a8b304832" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832") on node "crc" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.928922 4751 reconciler_common.go:293] "Volume detached for volume \"pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.928955 4751 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.055508 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.126835 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-wkszw"] Jan 30 21:42:42 crc kubenswrapper[4751]: E0130 21:42:42.127433 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" containerName="setup-container" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.127451 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" containerName="setup-container" Jan 30 21:42:42 crc kubenswrapper[4751]: E0130 21:42:42.127463 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" containerName="rabbitmq" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.127469 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" containerName="rabbitmq" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.127686 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" containerName="rabbitmq" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.128927 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.133900 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.134178 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.150655 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-wkszw"] Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.238139 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-config\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.238240 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.238286 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkx89\" (UniqueName: \"kubernetes.io/projected/56ea54f1-23d8-4e09-b159-bd66a7bb5618-kube-api-access-zkx89\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.238371 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.238426 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.238489 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.238524 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc 
kubenswrapper[4751]: I0130 21:42:42.341076 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-config\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.341173 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.341217 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkx89\" (UniqueName: \"kubernetes.io/projected/56ea54f1-23d8-4e09-b159-bd66a7bb5618-kube-api-access-zkx89\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.341427 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.341497 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.341598 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.341723 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.342415 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-config\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.342736 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.342842 4751 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.342906 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.342901 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.343242 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.367543 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkx89\" (UniqueName: \"kubernetes.io/projected/56ea54f1-23d8-4e09-b159-bd66a7bb5618-kube-api-access-zkx89\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.466599 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.541300 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.581335 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.596528 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.613102 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.615297 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.647144 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.750100 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.750311 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.750554 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.750592 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/29afad92-51c9-45a8-a6a0-ed64925f91f3-server-conf\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.750618 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/29afad92-51c9-45a8-a6a0-ed64925f91f3-pod-info\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.750655 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.751243 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29afad92-51c9-45a8-a6a0-ed64925f91f3-config-data\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.751352 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/29afad92-51c9-45a8-a6a0-ed64925f91f3-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.751382 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.751657 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/29afad92-51c9-45a8-a6a0-ed64925f91f3-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.751786 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69ztt\" (UniqueName: \"kubernetes.io/projected/29afad92-51c9-45a8-a6a0-ed64925f91f3-kube-api-access-69ztt\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.853419 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.853503 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29afad92-51c9-45a8-a6a0-ed64925f91f3-config-data\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.853534 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/29afad92-51c9-45a8-a6a0-ed64925f91f3-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.853556 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.853638 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/29afad92-51c9-45a8-a6a0-ed64925f91f3-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.853675 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69ztt\" (UniqueName: \"kubernetes.io/projected/29afad92-51c9-45a8-a6a0-ed64925f91f3-kube-api-access-69ztt\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.854140 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " 
pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.854422 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.854473 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.854539 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.854562 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/29afad92-51c9-45a8-a6a0-ed64925f91f3-server-conf\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.854563 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/29afad92-51c9-45a8-a6a0-ed64925f91f3-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.854597 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/29afad92-51c9-45a8-a6a0-ed64925f91f3-pod-info\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.854908 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.855324 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29afad92-51c9-45a8-a6a0-ed64925f91f3-config-data\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.855762 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/29afad92-51c9-45a8-a6a0-ed64925f91f3-server-conf\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.858648 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.859391 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.859740 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/29afad92-51c9-45a8-a6a0-ed64925f91f3-pod-info\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.863932 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.863961 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/38819e4ab89b59440f000d1a076c7489b3d13c82621db763cbf8d17a6b6689f4/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.873184 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69ztt\" (UniqueName: \"kubernetes.io/projected/29afad92-51c9-45a8-a6a0-ed64925f91f3-kube-api-access-69ztt\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.875681 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/29afad92-51c9-45a8-a6a0-ed64925f91f3-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.973512 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:43 crc kubenswrapper[4751]: I0130 21:42:43.246829 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 30 21:42:43 crc kubenswrapper[4751]: I0130 21:42:43.990310 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" path="/var/lib/kubelet/pods/192a5913-0c28-4214-9ac0-d37ca2eeb34c/volumes" Jan 30 21:42:46 crc kubenswrapper[4751]: I0130 21:42:46.975907 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:42:46 crc kubenswrapper[4751]: E0130 21:42:46.976995 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.341841 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.430400 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/61d75daf-41cb-4ab5-b849-c98080ca748b-pod-info\") pod \"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.430719 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-tls\") pod \"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.430754 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-server-conf\") pod \"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.430838 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-confd\") pod \"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.430943 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-config-data\") pod \"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.431078 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-erlang-cookie\") pod \"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.431132 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-plugins\") pod 
\"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.431160 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/61d75daf-41cb-4ab5-b849-c98080ca748b-erlang-cookie-secret\") pod \"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.431211 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-plugins-conf\") pod \"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.431269 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hw2d\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-kube-api-access-8hw2d\") pod \"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.431875 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") pod \"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.439060 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/61d75daf-41cb-4ab5-b849-c98080ca748b-pod-info" (OuterVolumeSpecName: "pod-info") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.441052 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.441384 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.445824 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.446618 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61d75daf-41cb-4ab5-b849-c98080ca748b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.448813 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.449518 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-kube-api-access-8hw2d" (OuterVolumeSpecName: "kube-api-access-8hw2d") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "kube-api-access-8hw2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.495500 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98" (OuterVolumeSpecName: "persistence") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "pvc-415c3201-a3f6-4e58-8696-79f9797a5e98". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.518352 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-config-data" (OuterVolumeSpecName: "config-data") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.537105 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.541376 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.541541 4751 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/61d75daf-41cb-4ab5-b849-c98080ca748b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.541604 4751 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.541671 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hw2d\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-kube-api-access-8hw2d\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.541764 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") on node \"crc\" " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.541842 4751 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/61d75daf-41cb-4ab5-b849-c98080ca748b-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.541902 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.541962 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.588881 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-server-conf" (OuterVolumeSpecName: "server-conf") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.605067 4751 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.605213 4751 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-415c3201-a3f6-4e58-8696-79f9797a5e98" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98") on node "crc" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.641880 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.645643 4751 reconciler_common.go:293] "Volume detached for volume \"pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.645833 4751 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.645920 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.650941 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"61d75daf-41cb-4ab5-b849-c98080ca748b","Type":"ContainerDied","Data":"f4f06c01fc35225b23f5f598399e00ef90da1d1a2d96b3cf839a507f64a8e8e3"} Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.651030 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.651141 4751 scope.go:117] "RemoveContainer" containerID="654aa5cd180d3480262a0eb6327c9c516fd2aafbea0de4e5b807e47db7d88dd1" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.732205 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.771360 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.782228 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 21:42:49 crc kubenswrapper[4751]: E0130 21:42:49.782904 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61d75daf-41cb-4ab5-b849-c98080ca748b" containerName="rabbitmq" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.782933 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d75daf-41cb-4ab5-b849-c98080ca748b" containerName="rabbitmq" Jan 30 21:42:49 crc kubenswrapper[4751]: E0130 21:42:49.782950 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61d75daf-41cb-4ab5-b849-c98080ca748b" containerName="setup-container" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.782956 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d75daf-41cb-4ab5-b849-c98080ca748b" containerName="setup-container" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.783193 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="61d75daf-41cb-4ab5-b849-c98080ca748b" containerName="rabbitmq" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.784761 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.786762 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qvp6f" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.787145 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.787341 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.787561 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.787674 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.787768 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.787890 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.817452 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.851366 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.851606 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa019efa-4067-4bd5-b370-12f6a4e6b856-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.851697 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aa019efa-4067-4bd5-b370-12f6a4e6b856-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.851826 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.851903 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.852009 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa019efa-4067-4bd5-b370-12f6a4e6b856-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.852097 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.852255 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa019efa-4067-4bd5-b370-12f6a4e6b856-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.852369 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa019efa-4067-4bd5-b370-12f6a4e6b856-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.852446 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj8k5\" (UniqueName: \"kubernetes.io/projected/aa019efa-4067-4bd5-b370-12f6a4e6b856-kube-api-access-mj8k5\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.852686 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954535 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa019efa-4067-4bd5-b370-12f6a4e6b856-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954594 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa019efa-4067-4bd5-b370-12f6a4e6b856-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954618 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj8k5\" (UniqueName: \"kubernetes.io/projected/aa019efa-4067-4bd5-b370-12f6a4e6b856-kube-api-access-mj8k5\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954659 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954725 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954746 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa019efa-4067-4bd5-b370-12f6a4e6b856-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954765 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aa019efa-4067-4bd5-b370-12f6a4e6b856-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954784 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954801 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954835 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa019efa-4067-4bd5-b370-12f6a4e6b856-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954858 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.955262 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.956298 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/aa019efa-4067-4bd5-b370-12f6a4e6b856-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.958592 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.959602 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa019efa-4067-4bd5-b370-12f6a4e6b856-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.960869 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa019efa-4067-4bd5-b370-12f6a4e6b856-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.960945 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa019efa-4067-4bd5-b370-12f6a4e6b856-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.961265 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aa019efa-4067-4bd5-b370-12f6a4e6b856-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.961489 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.961517 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ea75936dffe846fa8fe6e7d04e4555ffbed93863b04fcd828432921ea88ef24a/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.961889 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.979069 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.981653 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj8k5\" (UniqueName: \"kubernetes.io/projected/aa019efa-4067-4bd5-b370-12f6a4e6b856-kube-api-access-mj8k5\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.999394 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61d75daf-41cb-4ab5-b849-c98080ca748b" path="/var/lib/kubelet/pods/61d75daf-41cb-4ab5-b849-c98080ca748b/volumes" Jan 30 21:42:50 crc kubenswrapper[4751]: I0130 21:42:50.029221 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:50 crc kubenswrapper[4751]: I0130 21:42:50.110350 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:52 crc kubenswrapper[4751]: E0130 21:42:52.875238 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Jan 30 21:42:52 crc kubenswrapper[4751]: E0130 21:42:52.876620 4751 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Jan 30 21:42:52 crc kubenswrapper[4751]: E0130 21:42:52.876751 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfv8d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-xw5xf_openstack(27b928b3-101e-4649-ae57-9857145062f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:42:52 crc kubenswrapper[4751]: E0130 21:42:52.877946 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-xw5xf" podUID="27b928b3-101e-4649-ae57-9857145062f0" Jan 30 21:42:53 crc kubenswrapper[4751]: I0130 21:42:53.306074 4751 scope.go:117] "RemoveContainer" containerID="cf3b264e8ec141124dc8cea806067e0197228587097f1a72076d1d5e3beee32f" Jan 30 21:42:53 crc kubenswrapper[4751]: E0130 21:42:53.379743 4751 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Jan 30 21:42:53 crc kubenswrapper[4751]: E0130 21:42:53.379834 4751 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Jan 30 21:42:53 crc kubenswrapper[4751]: E0130 21:42:53.379973 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8h5fdh8hf9h5f6h647h79hffh5c9h546h67dh9h57fhb9h5f5h589h559h56dh9dhf6h78h657h5d5hb6h545h548h64dh64h686h5fdh64h584q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8m7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(c69dc070-7de6-4681-a44b-6e2007a7f671): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:42:53 crc kubenswrapper[4751]: E0130 21:42:53.709167 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-xw5xf" podUID="27b928b3-101e-4649-ae57-9857145062f0" 
Jan 30 21:42:53 crc kubenswrapper[4751]: I0130 21:42:53.856941 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-wkszw"] Jan 30 21:42:53 crc kubenswrapper[4751]: I0130 21:42:53.868892 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 30 21:42:53 crc kubenswrapper[4751]: W0130 21:42:53.874564 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29afad92_51c9_45a8_a6a0_ed64925f91f3.slice/crio-8b3e976bf7ba71f1dec107ee762a8a5db7c6c511e6ae106509a4027d27b060d9 WatchSource:0}: Error finding container 8b3e976bf7ba71f1dec107ee762a8a5db7c6c511e6ae106509a4027d27b060d9: Status 404 returned error can't find the container with id 8b3e976bf7ba71f1dec107ee762a8a5db7c6c511e6ae106509a4027d27b060d9 Jan 30 21:42:54 crc kubenswrapper[4751]: I0130 21:42:54.014491 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 21:42:54 crc kubenswrapper[4751]: W0130 21:42:54.018359 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa019efa_4067_4bd5_b370_12f6a4e6b856.slice/crio-e25a35954b0e2f7dc7b2823f65739f2a45ca644dd1ee4859681b189dab2afd31 WatchSource:0}: Error finding container e25a35954b0e2f7dc7b2823f65739f2a45ca644dd1ee4859681b189dab2afd31: Status 404 returned error can't find the container with id e25a35954b0e2f7dc7b2823f65739f2a45ca644dd1ee4859681b189dab2afd31 Jan 30 21:42:54 crc kubenswrapper[4751]: I0130 21:42:54.721514 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c69dc070-7de6-4681-a44b-6e2007a7f671","Type":"ContainerStarted","Data":"f23cbc773929535c267b1b565c65bd190c33497cc90203d37681c600fe7f010d"} Jan 30 21:42:54 crc kubenswrapper[4751]: I0130 21:42:54.723409 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"29afad92-51c9-45a8-a6a0-ed64925f91f3","Type":"ContainerStarted","Data":"8b3e976bf7ba71f1dec107ee762a8a5db7c6c511e6ae106509a4027d27b060d9"} Jan 30 21:42:54 crc kubenswrapper[4751]: I0130 21:42:54.724977 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"aa019efa-4067-4bd5-b370-12f6a4e6b856","Type":"ContainerStarted","Data":"e25a35954b0e2f7dc7b2823f65739f2a45ca644dd1ee4859681b189dab2afd31"} Jan 30 21:42:54 crc kubenswrapper[4751]: I0130 21:42:54.726742 4751 generic.go:334] "Generic (PLEG): container finished" podID="56ea54f1-23d8-4e09-b159-bd66a7bb5618" containerID="dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68" exitCode=0 Jan 30 21:42:54 crc kubenswrapper[4751]: I0130 21:42:54.726774 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" event={"ID":"56ea54f1-23d8-4e09-b159-bd66a7bb5618","Type":"ContainerDied","Data":"dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68"} Jan 30 21:42:54 crc kubenswrapper[4751]: I0130 21:42:54.726790 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" event={"ID":"56ea54f1-23d8-4e09-b159-bd66a7bb5618","Type":"ContainerStarted","Data":"f71d247a34202ab7d446d615619ab6a42ca31575f3e6d6363567457b8f0020dc"} Jan 30 21:42:55 crc kubenswrapper[4751]: I0130 21:42:55.741114 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c69dc070-7de6-4681-a44b-6e2007a7f671","Type":"ContainerStarted","Data":"db34539eb93e17c04711ee82d3a70653aa68a76b548521348f6d116019544d4b"} Jan 30 21:42:55 crc kubenswrapper[4751]: I0130 21:42:55.743458 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" event={"ID":"56ea54f1-23d8-4e09-b159-bd66a7bb5618","Type":"ContainerStarted","Data":"9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d"} Jan 30 21:42:55 crc kubenswrapper[4751]: I0130 21:42:55.745246 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:55 crc kubenswrapper[4751]: I0130 21:42:55.773786 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" podStartSLOduration=13.77375518 podStartE2EDuration="13.77375518s" podCreationTimestamp="2026-01-30 21:42:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:55.764832069 +0000 UTC m=+1714.510654818" watchObservedRunningTime="2026-01-30 21:42:55.77375518 +0000 UTC m=+1714.519577869" Jan 30 21:42:56 crc kubenswrapper[4751]: I0130 21:42:56.783521 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"29afad92-51c9-45a8-a6a0-ed64925f91f3","Type":"ContainerStarted","Data":"a703bc2df41831a8f47a2fe3701f155f23aa568e85a94cdaaa0863e49f204a11"} Jan 30 21:42:56 crc kubenswrapper[4751]: I0130 21:42:56.801122 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"aa019efa-4067-4bd5-b370-12f6a4e6b856","Type":"ContainerStarted","Data":"b6f44bbe8cb9612ed9ca700c8e80a7500e74cb71a824141436a42322c6f2c1ec"} Jan 30 21:42:57 crc kubenswrapper[4751]: E0130 21:42:57.361610 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="c69dc070-7de6-4681-a44b-6e2007a7f671" Jan 30 21:42:57 crc kubenswrapper[4751]: I0130 21:42:57.817496 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c69dc070-7de6-4681-a44b-6e2007a7f671","Type":"ContainerStarted","Data":"0fc60174b581299430c4303e26c6f1061519111331cf423fa1f499b255f18b87"} Jan 30 21:42:57 crc kubenswrapper[4751]: I0130 21:42:57.818888 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 21:42:57 crc kubenswrapper[4751]: E0130 21:42:57.821496 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="c69dc070-7de6-4681-a44b-6e2007a7f671" Jan 30 21:42:57 crc kubenswrapper[4751]: I0130 21:42:57.977811 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:42:57 crc kubenswrapper[4751]: E0130 21:42:57.978748 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:42:58 crc kubenswrapper[4751]: E0130 21:42:58.839797 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="c69dc070-7de6-4681-a44b-6e2007a7f671" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.468576 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.525182 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-z9wt9"] Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.525427 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" podUID="294126cb-98f1-4a1b-84eb-256f24d312ec" containerName="dnsmasq-dns" containerID="cri-o://594cfabac973f7db3981d05b5c834beff59d6ed05b3a70c8219822cdac4213e7" gracePeriod=10 Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.718099 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-7h4pb"] Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.721646 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.744717 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-7h4pb"] Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.802854 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.803079 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-config\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.803115 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.803685 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.803778 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngzwh\" (UniqueName: \"kubernetes.io/projected/1eb1b0d1-2407-440a-826b-b5158aab8be3-kube-api-access-ngzwh\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.803862 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.803922 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.925819 4751 generic.go:334] "Generic (PLEG): container finished" podID="294126cb-98f1-4a1b-84eb-256f24d312ec" containerID="594cfabac973f7db3981d05b5c834beff59d6ed05b3a70c8219822cdac4213e7" exitCode=0 Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.925860 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" event={"ID":"294126cb-98f1-4a1b-84eb-256f24d312ec","Type":"ContainerDied","Data":"594cfabac973f7db3981d05b5c834beff59d6ed05b3a70c8219822cdac4213e7"} Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.926265 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.926343 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngzwh\" (UniqueName: \"kubernetes.io/projected/1eb1b0d1-2407-440a-826b-b5158aab8be3-kube-api-access-ngzwh\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.926447 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.926489 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.926535 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-dns-swift-storage-0\") 
pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.926632 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-config\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.926649 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.927708 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.929029 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-config\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.929622 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.931527 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.934274 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.934791 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.962914 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngzwh\" (UniqueName: \"kubernetes.io/projected/1eb1b0d1-2407-440a-826b-b5158aab8be3-kube-api-access-ngzwh\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " 
pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.070130 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.373059 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.558449 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-config\") pod \"294126cb-98f1-4a1b-84eb-256f24d312ec\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.558609 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-sb\") pod \"294126cb-98f1-4a1b-84eb-256f24d312ec\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.558664 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-swift-storage-0\") pod \"294126cb-98f1-4a1b-84eb-256f24d312ec\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.558712 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-svc\") pod \"294126cb-98f1-4a1b-84eb-256f24d312ec\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.558752 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-nb\") pod \"294126cb-98f1-4a1b-84eb-256f24d312ec\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.558816 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvszz\" (UniqueName: \"kubernetes.io/projected/294126cb-98f1-4a1b-84eb-256f24d312ec-kube-api-access-zvszz\") pod \"294126cb-98f1-4a1b-84eb-256f24d312ec\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.565743 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/294126cb-98f1-4a1b-84eb-256f24d312ec-kube-api-access-zvszz" (OuterVolumeSpecName: "kube-api-access-zvszz") pod "294126cb-98f1-4a1b-84eb-256f24d312ec" (UID: "294126cb-98f1-4a1b-84eb-256f24d312ec"). InnerVolumeSpecName "kube-api-access-zvszz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.641549 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "294126cb-98f1-4a1b-84eb-256f24d312ec" (UID: "294126cb-98f1-4a1b-84eb-256f24d312ec"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.643769 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-config" (OuterVolumeSpecName: "config") pod "294126cb-98f1-4a1b-84eb-256f24d312ec" (UID: "294126cb-98f1-4a1b-84eb-256f24d312ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.645925 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "294126cb-98f1-4a1b-84eb-256f24d312ec" (UID: "294126cb-98f1-4a1b-84eb-256f24d312ec"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.661037 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "294126cb-98f1-4a1b-84eb-256f24d312ec" (UID: "294126cb-98f1-4a1b-84eb-256f24d312ec"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.661581 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.661612 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.661622 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvszz\" (UniqueName: \"kubernetes.io/projected/294126cb-98f1-4a1b-84eb-256f24d312ec-kube-api-access-zvszz\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.661632 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.661641 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.683212 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "294126cb-98f1-4a1b-84eb-256f24d312ec" (UID: "294126cb-98f1-4a1b-84eb-256f24d312ec"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.764200 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.796765 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-7h4pb"] Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.940170 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" event={"ID":"1eb1b0d1-2407-440a-826b-b5158aab8be3","Type":"ContainerStarted","Data":"3466fcdc52df7c2089d6d3aa53857d5c96a39ae6774283b089964ca3f45c3a64"} Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.942747 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" event={"ID":"294126cb-98f1-4a1b-84eb-256f24d312ec","Type":"ContainerDied","Data":"dee8c983aae4ef6924c6d9d77cb8b52f55a1c21e202b60331cafd82b2208a0d0"} Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.942829 4751 scope.go:117] "RemoveContainer" containerID="594cfabac973f7db3981d05b5c834beff59d6ed05b3a70c8219822cdac4213e7" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.943303 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:43:04 crc kubenswrapper[4751]: I0130 21:43:04.004140 4751 scope.go:117] "RemoveContainer" containerID="6917c598c5eba82a1b463890dd92dd5f9d24bd22527e450c4ff5bef9192d6678" Jan 30 21:43:04 crc kubenswrapper[4751]: I0130 21:43:04.010796 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-z9wt9"] Jan 30 21:43:04 crc kubenswrapper[4751]: I0130 21:43:04.025675 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-z9wt9"] Jan 30 21:43:04 crc kubenswrapper[4751]: I0130 21:43:04.957697 4751 generic.go:334] "Generic (PLEG): container finished" podID="1eb1b0d1-2407-440a-826b-b5158aab8be3" containerID="c9d73d17215eb049095438aaa3593690ae9f3794d74be4caa48b5003e376b0ba" exitCode=0 Jan 30 21:43:04 crc kubenswrapper[4751]: I0130 21:43:04.957740 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" event={"ID":"1eb1b0d1-2407-440a-826b-b5158aab8be3","Type":"ContainerDied","Data":"c9d73d17215eb049095438aaa3593690ae9f3794d74be4caa48b5003e376b0ba"} Jan 30 21:43:06 crc kubenswrapper[4751]: I0130 21:43:06.008152 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="294126cb-98f1-4a1b-84eb-256f24d312ec" path="/var/lib/kubelet/pods/294126cb-98f1-4a1b-84eb-256f24d312ec/volumes" Jan 30 21:43:06 crc kubenswrapper[4751]: I0130 21:43:06.009752 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" event={"ID":"1eb1b0d1-2407-440a-826b-b5158aab8be3","Type":"ContainerStarted","Data":"7b36b4dac611412fc837a5177e36a388bf54508291136f08db1d4dbf270b7d1d"} Jan 30 21:43:06 crc kubenswrapper[4751]: I0130 21:43:06.009791 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:07 crc kubenswrapper[4751]: I0130 21:43:07.019489 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" podStartSLOduration=5.019460075 podStartE2EDuration="5.019460075s" 
podCreationTimestamp="2026-01-30 21:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:43:06.014701499 +0000 UTC m=+1724.760524158" watchObservedRunningTime="2026-01-30 21:43:07.019460075 +0000 UTC m=+1725.765282764" Jan 30 21:43:08 crc kubenswrapper[4751]: I0130 21:43:08.030984 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xw5xf" event={"ID":"27b928b3-101e-4649-ae57-9857145062f0","Type":"ContainerStarted","Data":"74af78e3c804e6dbf95f30a1b6c4ba765fc8edb69bfb20dd7f1176259283a952"} Jan 30 21:43:08 crc kubenswrapper[4751]: I0130 21:43:08.066885 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-xw5xf" podStartSLOduration=1.875446451 podStartE2EDuration="43.066863823s" podCreationTimestamp="2026-01-30 21:42:25 +0000 UTC" firstStartedPulling="2026-01-30 21:42:26.011541638 +0000 UTC m=+1684.757364287" lastFinishedPulling="2026-01-30 21:43:07.20295901 +0000 UTC m=+1725.948781659" observedRunningTime="2026-01-30 21:43:08.048488696 +0000 UTC m=+1726.794311395" watchObservedRunningTime="2026-01-30 21:43:08.066863823 +0000 UTC m=+1726.812686482" Jan 30 21:43:08 crc kubenswrapper[4751]: I0130 21:43:08.327560 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" podUID="294126cb-98f1-4a1b-84eb-256f24d312ec" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.7:5353: i/o timeout" Jan 30 21:43:10 crc kubenswrapper[4751]: I0130 21:43:10.068062 4751 generic.go:334] "Generic (PLEG): container finished" podID="27b928b3-101e-4649-ae57-9857145062f0" containerID="74af78e3c804e6dbf95f30a1b6c4ba765fc8edb69bfb20dd7f1176259283a952" exitCode=0 Jan 30 21:43:10 crc kubenswrapper[4751]: I0130 21:43:10.068148 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xw5xf" event={"ID":"27b928b3-101e-4649-ae57-9857145062f0","Type":"ContainerDied","Data":"74af78e3c804e6dbf95f30a1b6c4ba765fc8edb69bfb20dd7f1176259283a952"} Jan 30 21:43:11 crc kubenswrapper[4751]: I0130 21:43:11.590868 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-xw5xf" Jan 30 21:43:11 crc kubenswrapper[4751]: I0130 21:43:11.718933 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfv8d\" (UniqueName: \"kubernetes.io/projected/27b928b3-101e-4649-ae57-9857145062f0-kube-api-access-kfv8d\") pod \"27b928b3-101e-4649-ae57-9857145062f0\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " Jan 30 21:43:11 crc kubenswrapper[4751]: I0130 21:43:11.719034 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-combined-ca-bundle\") pod \"27b928b3-101e-4649-ae57-9857145062f0\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " Jan 30 21:43:11 crc kubenswrapper[4751]: I0130 21:43:11.719461 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-config-data\") pod \"27b928b3-101e-4649-ae57-9857145062f0\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " Jan 30 21:43:11 crc kubenswrapper[4751]: I0130 21:43:11.725784 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27b928b3-101e-4649-ae57-9857145062f0-kube-api-access-kfv8d" (OuterVolumeSpecName: "kube-api-access-kfv8d") pod "27b928b3-101e-4649-ae57-9857145062f0" (UID: "27b928b3-101e-4649-ae57-9857145062f0"). InnerVolumeSpecName "kube-api-access-kfv8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:11 crc kubenswrapper[4751]: I0130 21:43:11.768276 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27b928b3-101e-4649-ae57-9857145062f0" (UID: "27b928b3-101e-4649-ae57-9857145062f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:11 crc kubenswrapper[4751]: I0130 21:43:11.819005 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-config-data" (OuterVolumeSpecName: "config-data") pod "27b928b3-101e-4649-ae57-9857145062f0" (UID: "27b928b3-101e-4649-ae57-9857145062f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:11 crc kubenswrapper[4751]: I0130 21:43:11.823362 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfv8d\" (UniqueName: \"kubernetes.io/projected/27b928b3-101e-4649-ae57-9857145062f0-kube-api-access-kfv8d\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:11 crc kubenswrapper[4751]: I0130 21:43:11.823416 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:11 crc kubenswrapper[4751]: I0130 21:43:11.823437 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:12 crc kubenswrapper[4751]: I0130 21:43:12.020468 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 21:43:12 crc kubenswrapper[4751]: I0130 21:43:12.094947 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xw5xf" event={"ID":"27b928b3-101e-4649-ae57-9857145062f0","Type":"ContainerDied","Data":"404e00df247a8af350a6ca415370390444a1c5afadb4e6e6be032527b135bac6"} Jan 30 21:43:12 crc kubenswrapper[4751]: I0130 21:43:12.095628 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="404e00df247a8af350a6ca415370390444a1c5afadb4e6e6be032527b135bac6" Jan 30 21:43:12 crc kubenswrapper[4751]: I0130 21:43:12.095247 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-xw5xf" Jan 30 21:43:12 crc kubenswrapper[4751]: I0130 21:43:12.976570 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:43:12 crc kubenswrapper[4751]: E0130 21:43:12.977097 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.051580 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7ccc7fc744-trd9b"] Jan 30 21:43:13 crc kubenswrapper[4751]: E0130 21:43:13.052259 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="294126cb-98f1-4a1b-84eb-256f24d312ec" containerName="dnsmasq-dns" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.052278 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="294126cb-98f1-4a1b-84eb-256f24d312ec" containerName="dnsmasq-dns" Jan 30 21:43:13 crc kubenswrapper[4751]: E0130 21:43:13.052297 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27b928b3-101e-4649-ae57-9857145062f0" containerName="heat-db-sync" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.052305 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="27b928b3-101e-4649-ae57-9857145062f0" containerName="heat-db-sync" Jan 30 21:43:13 crc kubenswrapper[4751]: E0130 21:43:13.052318 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="294126cb-98f1-4a1b-84eb-256f24d312ec" containerName="init" Jan 30 
21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.052344 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="294126cb-98f1-4a1b-84eb-256f24d312ec" containerName="init" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.052666 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="294126cb-98f1-4a1b-84eb-256f24d312ec" containerName="dnsmasq-dns" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.052687 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="27b928b3-101e-4649-ae57-9857145062f0" containerName="heat-db-sync" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.053727 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.067930 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7ccc7fc744-trd9b"] Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.073183 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.152211 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c69dc070-7de6-4681-a44b-6e2007a7f671","Type":"ContainerStarted","Data":"c982ce18940dc550b209265bbc512e136723991b7c02b5ff8f903d2c79cb1d9d"} Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.156015 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2465732f-6109-4d66-84c4-f08a6a1ac472-config-data-custom\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.156133 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2465732f-6109-4d66-84c4-f08a6a1ac472-config-data\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.156267 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2dpf\" (UniqueName: \"kubernetes.io/projected/2465732f-6109-4d66-84c4-f08a6a1ac472-kube-api-access-v2dpf\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.156646 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2465732f-6109-4d66-84c4-f08a6a1ac472-combined-ca-bundle\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.191021 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-68c4b8fdd-wvfwg"] Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.193020 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.246710 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-68c4b8fdd-wvfwg"] Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.260282 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2465732f-6109-4d66-84c4-f08a6a1ac472-combined-ca-bundle\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.260404 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2465732f-6109-4d66-84c4-f08a6a1ac472-config-data-custom\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.260524 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2465732f-6109-4d66-84c4-f08a6a1ac472-config-data\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.260712 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2dpf\" (UniqueName: \"kubernetes.io/projected/2465732f-6109-4d66-84c4-f08a6a1ac472-kube-api-access-v2dpf\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.270674 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2465732f-6109-4d66-84c4-f08a6a1ac472-config-data\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.271977 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2465732f-6109-4d66-84c4-f08a6a1ac472-combined-ca-bundle\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.275268 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2465732f-6109-4d66-84c4-f08a6a1ac472-config-data-custom\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.297978 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2dpf\" (UniqueName: \"kubernetes.io/projected/2465732f-6109-4d66-84c4-f08a6a1ac472-kube-api-access-v2dpf\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.312804 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-wkszw"] Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 
21:43:13.313039 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" podUID="56ea54f1-23d8-4e09-b159-bd66a7bb5618" containerName="dnsmasq-dns" containerID="cri-o://9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d" gracePeriod=10 Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.324565 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-75666c8dc5-6rmsl"] Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.326620 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.332493 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.671070304 podStartE2EDuration="43.332476301s" podCreationTimestamp="2026-01-30 21:42:30 +0000 UTC" firstStartedPulling="2026-01-30 21:42:31.51784579 +0000 UTC m=+1690.263668429" lastFinishedPulling="2026-01-30 21:43:12.179251777 +0000 UTC m=+1730.925074426" observedRunningTime="2026-01-30 21:43:13.194407663 +0000 UTC m=+1731.940230312" watchObservedRunningTime="2026-01-30 21:43:13.332476301 +0000 UTC m=+1732.078298950" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.362304 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-combined-ca-bundle\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.362391 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-internal-tls-certs\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.362498 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-public-tls-certs\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.362878 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-config-data-custom\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.362910 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-config-data\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.363015 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfspw\" (UniqueName: 
\"kubernetes.io/projected/ce637680-0e89-4089-bbb7-704117a5dcb0-kube-api-access-bfspw\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.364831 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-75666c8dc5-6rmsl"] Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.401498 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.466491 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg884\" (UniqueName: \"kubernetes.io/projected/3100f81b-465d-42f8-9bbd-88e0aecbdc56-kube-api-access-vg884\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.467097 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-combined-ca-bundle\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.467184 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-config-data-custom\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.467282 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-config-data\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.468231 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfspw\" (UniqueName: \"kubernetes.io/projected/ce637680-0e89-4089-bbb7-704117a5dcb0-kube-api-access-bfspw\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.468403 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-public-tls-certs\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.468663 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-config-data\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.468694 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-config-data-custom\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.468724 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-combined-ca-bundle\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.468792 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-internal-tls-certs\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.468954 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-public-tls-certs\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.468972 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-internal-tls-certs\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.473701 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-config-data\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.475267 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-internal-tls-certs\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.476128 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-config-data-custom\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.477909 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-public-tls-certs\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.480025 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-combined-ca-bundle\") pod 
\"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.493093 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfspw\" (UniqueName: \"kubernetes.io/projected/ce637680-0e89-4089-bbb7-704117a5dcb0-kube-api-access-bfspw\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.544028 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.571158 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-internal-tls-certs\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.571257 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg884\" (UniqueName: \"kubernetes.io/projected/3100f81b-465d-42f8-9bbd-88e0aecbdc56-kube-api-access-vg884\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.571308 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-combined-ca-bundle\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.571397 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-public-tls-certs\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.571423 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-config-data\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.571457 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-config-data-custom\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.578544 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-config-data\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.579121 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-public-tls-certs\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.579201 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-config-data-custom\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.580501 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-internal-tls-certs\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.580969 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-combined-ca-bundle\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.620097 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg884\" (UniqueName: \"kubernetes.io/projected/3100f81b-465d-42f8-9bbd-88e0aecbdc56-kube-api-access-vg884\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.751146 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:14 crc kubenswrapper[4751]: W0130 21:43:14.029852 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2465732f_6109_4d66_84c4_f08a6a1ac472.slice/crio-e782eb7df9abe611c23afd540f18ce0fde89c4bc457e46705177a9e1553a84ad WatchSource:0}: Error finding container e782eb7df9abe611c23afd540f18ce0fde89c4bc457e46705177a9e1553a84ad: Status 404 returned error can't find the container with id e782eb7df9abe611c23afd540f18ce0fde89c4bc457e46705177a9e1553a84ad Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.038533 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7ccc7fc744-trd9b"] Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.101770 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.176859 4751 generic.go:334] "Generic (PLEG): container finished" podID="56ea54f1-23d8-4e09-b159-bd66a7bb5618" containerID="9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d" exitCode=0 Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.177186 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.178499 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" event={"ID":"56ea54f1-23d8-4e09-b159-bd66a7bb5618","Type":"ContainerDied","Data":"9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d"} Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.178567 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" event={"ID":"56ea54f1-23d8-4e09-b159-bd66a7bb5618","Type":"ContainerDied","Data":"f71d247a34202ab7d446d615619ab6a42ca31575f3e6d6363567457b8f0020dc"} Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.178595 4751 scope.go:117] "RemoveContainer" containerID="9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.182163 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7ccc7fc744-trd9b" event={"ID":"2465732f-6109-4d66-84c4-f08a6a1ac472","Type":"ContainerStarted","Data":"e782eb7df9abe611c23afd540f18ce0fde89c4bc457e46705177a9e1553a84ad"} Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.218751 4751 scope.go:117] "RemoveContainer" containerID="dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.253154 4751 scope.go:117] "RemoveContainer" containerID="9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d" Jan 30 21:43:14 crc kubenswrapper[4751]: E0130 21:43:14.253612 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d\": container with ID starting with 9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d not found: ID does not exist" containerID="9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.253693 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d"} err="failed to get container status \"9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d\": rpc error: code = NotFound desc = could not find container \"9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d\": container with ID starting with 9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d not found: ID does not exist" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.253738 4751 scope.go:117] "RemoveContainer" containerID="dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68" Jan 30 21:43:14 crc kubenswrapper[4751]: E0130 21:43:14.256735 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68\": container with ID starting with dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68 not found: ID does not exist" containerID="dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.258141 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68"} err="failed to get container status 
\"dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68\": rpc error: code = NotFound desc = could not find container \"dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68\": container with ID starting with dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68 not found: ID does not exist" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.271553 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-68c4b8fdd-wvfwg"] Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.302282 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-openstack-edpm-ipam\") pod \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.302401 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-config\") pod \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.302427 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-svc\") pod \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.302462 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-swift-storage-0\") pod \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.302487 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkx89\" (UniqueName: \"kubernetes.io/projected/56ea54f1-23d8-4e09-b159-bd66a7bb5618-kube-api-access-zkx89\") pod \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.302539 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-sb\") pod \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.302790 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-nb\") pod \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.310100 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56ea54f1-23d8-4e09-b159-bd66a7bb5618-kube-api-access-zkx89" (OuterVolumeSpecName: "kube-api-access-zkx89") pod "56ea54f1-23d8-4e09-b159-bd66a7bb5618" (UID: "56ea54f1-23d8-4e09-b159-bd66a7bb5618"). InnerVolumeSpecName "kube-api-access-zkx89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.376144 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "56ea54f1-23d8-4e09-b159-bd66a7bb5618" (UID: "56ea54f1-23d8-4e09-b159-bd66a7bb5618"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.376171 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-config" (OuterVolumeSpecName: "config") pod "56ea54f1-23d8-4e09-b159-bd66a7bb5618" (UID: "56ea54f1-23d8-4e09-b159-bd66a7bb5618"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.384965 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56ea54f1-23d8-4e09-b159-bd66a7bb5618" (UID: "56ea54f1-23d8-4e09-b159-bd66a7bb5618"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.394027 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "56ea54f1-23d8-4e09-b159-bd66a7bb5618" (UID: "56ea54f1-23d8-4e09-b159-bd66a7bb5618"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.405908 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.405938 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.405947 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.405955 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkx89\" (UniqueName: \"kubernetes.io/projected/56ea54f1-23d8-4e09-b159-bd66a7bb5618-kube-api-access-zkx89\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.405965 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.406386 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56ea54f1-23d8-4e09-b159-bd66a7bb5618" (UID: "56ea54f1-23d8-4e09-b159-bd66a7bb5618"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.411757 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56ea54f1-23d8-4e09-b159-bd66a7bb5618" (UID: "56ea54f1-23d8-4e09-b159-bd66a7bb5618"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.459146 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-75666c8dc5-6rmsl"] Jan 30 21:43:14 crc kubenswrapper[4751]: W0130 21:43:14.464503 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3100f81b_465d_42f8_9bbd_88e0aecbdc56.slice/crio-30520be9b041eff015abd949e64a4295c7efe4ebaff533e7aa05a8a0dcdc677d WatchSource:0}: Error finding container 30520be9b041eff015abd949e64a4295c7efe4ebaff533e7aa05a8a0dcdc677d: Status 404 returned error can't find the container with id 30520be9b041eff015abd949e64a4295c7efe4ebaff533e7aa05a8a0dcdc677d Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.511416 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.511447 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.523834 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-wkszw"] Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.535694 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-wkszw"] Jan 30 21:43:15 crc kubenswrapper[4751]: I0130 21:43:15.194556 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-68c4b8fdd-wvfwg" event={"ID":"ce637680-0e89-4089-bbb7-704117a5dcb0","Type":"ContainerStarted","Data":"ad24f26d8777d0ba1cfb276ba9cd17f4029eaa9d380eb07c9d996991575f8570"} Jan 30 21:43:15 crc kubenswrapper[4751]: I0130 21:43:15.199228 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" event={"ID":"3100f81b-465d-42f8-9bbd-88e0aecbdc56","Type":"ContainerStarted","Data":"30520be9b041eff015abd949e64a4295c7efe4ebaff533e7aa05a8a0dcdc677d"} Jan 30 21:43:15 crc kubenswrapper[4751]: I0130 21:43:15.200766 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7ccc7fc744-trd9b" event={"ID":"2465732f-6109-4d66-84c4-f08a6a1ac472","Type":"ContainerStarted","Data":"658d9824ceb6afbb917ed8d0673d3d53a1c934b4964073e122686bd5ab6e0145"} Jan 30 21:43:15 crc kubenswrapper[4751]: I0130 21:43:15.200955 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:15 crc kubenswrapper[4751]: I0130 21:43:15.225851 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7ccc7fc744-trd9b" podStartSLOduration=2.225835206 podStartE2EDuration="2.225835206s" podCreationTimestamp="2026-01-30 21:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:43:15.218606942 +0000 UTC m=+1733.964429581" watchObservedRunningTime="2026-01-30 21:43:15.225835206 +0000 UTC m=+1733.971657855" Jan 30 21:43:15 crc kubenswrapper[4751]: I0130 21:43:15.994261 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56ea54f1-23d8-4e09-b159-bd66a7bb5618" path="/var/lib/kubelet/pods/56ea54f1-23d8-4e09-b159-bd66a7bb5618/volumes" Jan 30 21:43:17 crc kubenswrapper[4751]: I0130 21:43:17.240858 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" event={"ID":"3100f81b-465d-42f8-9bbd-88e0aecbdc56","Type":"ContainerStarted","Data":"8df6ee0b57ace7352f8521c0d1e8d0a4985bc8826b702b26426bdf59bee4a39c"} Jan 30 21:43:17 crc kubenswrapper[4751]: I0130 21:43:17.241511 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:17 crc kubenswrapper[4751]: I0130 21:43:17.243543 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-68c4b8fdd-wvfwg" event={"ID":"ce637680-0e89-4089-bbb7-704117a5dcb0","Type":"ContainerStarted","Data":"3582a9eb606a7b66235da50ed492f0932575cf3a5f51474ebfad897ddaba2434"} Jan 30 21:43:17 crc kubenswrapper[4751]: I0130 21:43:17.255803 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:17 crc kubenswrapper[4751]: I0130 21:43:17.275664 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" podStartSLOduration=2.678256541 podStartE2EDuration="4.275644456s" podCreationTimestamp="2026-01-30 21:43:13 +0000 UTC" firstStartedPulling="2026-01-30 21:43:14.469732084 +0000 UTC m=+1733.215554733" lastFinishedPulling="2026-01-30 21:43:16.067119999 +0000 UTC m=+1734.812942648" observedRunningTime="2026-01-30 21:43:17.263522929 +0000 UTC m=+1736.009345618" watchObservedRunningTime="2026-01-30 21:43:17.275644456 +0000 UTC m=+1736.021467105" Jan 30 21:43:17 crc kubenswrapper[4751]: I0130 21:43:17.287227 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-68c4b8fdd-wvfwg" podStartSLOduration=2.482500416 podStartE2EDuration="4.287205848s" podCreationTimestamp="2026-01-30 21:43:13 +0000 UTC" firstStartedPulling="2026-01-30 21:43:14.267237947 +0000 UTC m=+1733.013060596" lastFinishedPulling="2026-01-30 21:43:16.071943369 +0000 UTC m=+1734.817766028" observedRunningTime="2026-01-30 21:43:17.282321337 +0000 UTC m=+1736.028143986" watchObservedRunningTime="2026-01-30 21:43:17.287205848 +0000 UTC m=+1736.033028487" Jan 30 21:43:23 crc kubenswrapper[4751]: I0130 21:43:23.076254 4751 scope.go:117] "RemoveContainer" containerID="f5e405ed39cb57c7e634de9365462e74ee99a3051cc26eb21d0da11ce6b70e82" Jan 30 21:43:23 crc kubenswrapper[4751]: I0130 21:43:23.131896 4751 scope.go:117] "RemoveContainer" containerID="8ab084e559e8069a5cdd46d2514468a22129fd354769c2604ada982fbc95ae13" Jan 30 21:43:23 crc kubenswrapper[4751]: I0130 21:43:23.174719 4751 scope.go:117] "RemoveContainer" containerID="ded685defb3526390eca5f7cb2d53cfb12497b060a9cc1ce297a52cc7244f151" Jan 30 21:43:23 crc kubenswrapper[4751]: I0130 21:43:23.228429 4751 scope.go:117] "RemoveContainer" containerID="8694789fa0038f6976a755ccc1f09ff5edec94cba32aab400030d4cae96b540d" Jan 30 21:43:23 crc kubenswrapper[4751]: I0130 21:43:23.274741 4751 scope.go:117] "RemoveContainer" 
containerID="6cd06f2bb56b148e8bf2fd2524c5d527d970ea6c6b7ba394cc56edcda374faf1" Jan 30 21:43:23 crc kubenswrapper[4751]: I0130 21:43:23.307185 4751 scope.go:117] "RemoveContainer" containerID="6a2c138626ec1f6b7d91772998275ab4f054944271024ad8876c0420d7d4bbc9" Jan 30 21:43:25 crc kubenswrapper[4751]: I0130 21:43:25.628088 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:25 crc kubenswrapper[4751]: I0130 21:43:25.650454 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:25 crc kubenswrapper[4751]: I0130 21:43:25.725425 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6f4bd4b69-ntk8n"] Jan 30 21:43:25 crc kubenswrapper[4751]: I0130 21:43:25.725978 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6f4bd4b69-ntk8n" podUID="43d36aef-fb14-4701-8931-9aaa96d049a9" containerName="heat-api" containerID="cri-o://f4a9281bbfdd290c0c4cad13b45f8bae7e6a12cff0d866a3bc02118e3db003a9" gracePeriod=60 Jan 30 21:43:25 crc kubenswrapper[4751]: I0130 21:43:25.739233 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-d6c877d68-9ktwv"] Jan 30 21:43:25 crc kubenswrapper[4751]: I0130 21:43:25.739584 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" podUID="8a808a38-f939-4b4f-8386-e177712737d6" containerName="heat-cfnapi" containerID="cri-o://702678d6125a2ef911b38b5fcb8c725d8c871b8257728962b6a494f07ee762d0" gracePeriod=60 Jan 30 21:43:25 crc kubenswrapper[4751]: I0130 21:43:25.976607 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:43:25 crc kubenswrapper[4751]: E0130 21:43:25.976953 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:43:28 crc kubenswrapper[4751]: I0130 21:43:28.891440 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6f4bd4b69-ntk8n" podUID="43d36aef-fb14-4701-8931-9aaa96d049a9" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.237:8004/healthcheck\": read tcp 10.217.0.2:44998->10.217.0.237:8004: read: connection reset by peer" Jan 30 21:43:28 crc kubenswrapper[4751]: I0130 21:43:28.921018 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" podUID="8a808a38-f939-4b4f-8386-e177712737d6" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.236:8000/healthcheck\": read tcp 10.217.0.2:39156->10.217.0.236:8000: read: connection reset by peer" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.412849 4751 generic.go:334] "Generic (PLEG): container finished" podID="8a808a38-f939-4b4f-8386-e177712737d6" containerID="702678d6125a2ef911b38b5fcb8c725d8c871b8257728962b6a494f07ee762d0" exitCode=0 Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.412933 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" 
event={"ID":"8a808a38-f939-4b4f-8386-e177712737d6","Type":"ContainerDied","Data":"702678d6125a2ef911b38b5fcb8c725d8c871b8257728962b6a494f07ee762d0"} Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.414207 4751 generic.go:334] "Generic (PLEG): container finished" podID="29afad92-51c9-45a8-a6a0-ed64925f91f3" containerID="a703bc2df41831a8f47a2fe3701f155f23aa568e85a94cdaaa0863e49f204a11" exitCode=0 Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.414250 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"29afad92-51c9-45a8-a6a0-ed64925f91f3","Type":"ContainerDied","Data":"a703bc2df41831a8f47a2fe3701f155f23aa568e85a94cdaaa0863e49f204a11"} Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.419578 4751 generic.go:334] "Generic (PLEG): container finished" podID="43d36aef-fb14-4701-8931-9aaa96d049a9" containerID="f4a9281bbfdd290c0c4cad13b45f8bae7e6a12cff0d866a3bc02118e3db003a9" exitCode=0 Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.419663 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f4bd4b69-ntk8n" event={"ID":"43d36aef-fb14-4701-8931-9aaa96d049a9","Type":"ContainerDied","Data":"f4a9281bbfdd290c0c4cad13b45f8bae7e6a12cff0d866a3bc02118e3db003a9"} Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.436168 4751 generic.go:334] "Generic (PLEG): container finished" podID="aa019efa-4067-4bd5-b370-12f6a4e6b856" containerID="b6f44bbe8cb9612ed9ca700c8e80a7500e74cb71a824141436a42322c6f2c1ec" exitCode=0 Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.436237 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"aa019efa-4067-4bd5-b370-12f6a4e6b856","Type":"ContainerDied","Data":"b6f44bbe8cb9612ed9ca700c8e80a7500e74cb71a824141436a42322c6f2c1ec"} Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.698992 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.706728 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.831460 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-internal-tls-certs\") pod \"43d36aef-fb14-4701-8931-9aaa96d049a9\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.831796 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data\") pod \"43d36aef-fb14-4701-8931-9aaa96d049a9\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.831890 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjzpc\" (UniqueName: \"kubernetes.io/projected/8a808a38-f939-4b4f-8386-e177712737d6-kube-api-access-wjzpc\") pod \"8a808a38-f939-4b4f-8386-e177712737d6\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.831957 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7dxt\" (UniqueName: \"kubernetes.io/projected/43d36aef-fb14-4701-8931-9aaa96d049a9-kube-api-access-n7dxt\") pod \"43d36aef-fb14-4701-8931-9aaa96d049a9\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.832071 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-public-tls-certs\") pod \"8a808a38-f939-4b4f-8386-e177712737d6\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.832158 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-combined-ca-bundle\") pod \"43d36aef-fb14-4701-8931-9aaa96d049a9\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.832234 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data-custom\") pod \"43d36aef-fb14-4701-8931-9aaa96d049a9\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.832288 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-combined-ca-bundle\") pod \"8a808a38-f939-4b4f-8386-e177712737d6\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.832366 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-internal-tls-certs\") pod \"8a808a38-f939-4b4f-8386-e177712737d6\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.832449 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-public-tls-certs\") pod \"43d36aef-fb14-4701-8931-9aaa96d049a9\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.832488 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data\") pod \"8a808a38-f939-4b4f-8386-e177712737d6\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.832522 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data-custom\") pod \"8a808a38-f939-4b4f-8386-e177712737d6\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.838905 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a808a38-f939-4b4f-8386-e177712737d6-kube-api-access-wjzpc" (OuterVolumeSpecName: "kube-api-access-wjzpc") pod "8a808a38-f939-4b4f-8386-e177712737d6" (UID: "8a808a38-f939-4b4f-8386-e177712737d6"). InnerVolumeSpecName "kube-api-access-wjzpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.838953 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8a808a38-f939-4b4f-8386-e177712737d6" (UID: "8a808a38-f939-4b4f-8386-e177712737d6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.845135 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43d36aef-fb14-4701-8931-9aaa96d049a9-kube-api-access-n7dxt" (OuterVolumeSpecName: "kube-api-access-n7dxt") pod "43d36aef-fb14-4701-8931-9aaa96d049a9" (UID: "43d36aef-fb14-4701-8931-9aaa96d049a9"). InnerVolumeSpecName "kube-api-access-n7dxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.857534 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "43d36aef-fb14-4701-8931-9aaa96d049a9" (UID: "43d36aef-fb14-4701-8931-9aaa96d049a9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.883513 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43d36aef-fb14-4701-8931-9aaa96d049a9" (UID: "43d36aef-fb14-4701-8931-9aaa96d049a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.898459 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a808a38-f939-4b4f-8386-e177712737d6" (UID: "8a808a38-f939-4b4f-8386-e177712737d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.925893 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "43d36aef-fb14-4701-8931-9aaa96d049a9" (UID: "43d36aef-fb14-4701-8931-9aaa96d049a9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.932019 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "43d36aef-fb14-4701-8931-9aaa96d049a9" (UID: "43d36aef-fb14-4701-8931-9aaa96d049a9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.935292 4751 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.935317 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjzpc\" (UniqueName: \"kubernetes.io/projected/8a808a38-f939-4b4f-8386-e177712737d6-kube-api-access-wjzpc\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.935340 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7dxt\" (UniqueName: \"kubernetes.io/projected/43d36aef-fb14-4701-8931-9aaa96d049a9-kube-api-access-n7dxt\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.935349 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.935357 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.935365 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.935373 4751 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.935381 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.962875 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data" (OuterVolumeSpecName: "config-data") pod "43d36aef-fb14-4701-8931-9aaa96d049a9" (UID: "43d36aef-fb14-4701-8931-9aaa96d049a9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.965203 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8a808a38-f939-4b4f-8386-e177712737d6" (UID: "8a808a38-f939-4b4f-8386-e177712737d6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.966801 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8a808a38-f939-4b4f-8386-e177712737d6" (UID: "8a808a38-f939-4b4f-8386-e177712737d6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.971569 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data" (OuterVolumeSpecName: "config-data") pod "8a808a38-f939-4b4f-8386-e177712737d6" (UID: "8a808a38-f939-4b4f-8386-e177712737d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.038243 4751 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.038798 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.038870 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.039233 4751 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.448656 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"aa019efa-4067-4bd5-b370-12f6a4e6b856","Type":"ContainerStarted","Data":"99fb433ba9970268f74cec9b102bd6a7711fb0ffe180c9fc936a1a0fbdf5d326"} Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.450022 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.452623 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" event={"ID":"8a808a38-f939-4b4f-8386-e177712737d6","Type":"ContainerDied","Data":"499d3637c3e03f2b7dc0a86e62ae72f328746856d2c5b4b97226255304ddbec8"} Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.452754 4751 scope.go:117] "RemoveContainer" containerID="702678d6125a2ef911b38b5fcb8c725d8c871b8257728962b6a494f07ee762d0" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.452967 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.457909 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"29afad92-51c9-45a8-a6a0-ed64925f91f3","Type":"ContainerStarted","Data":"827d0f7339fc9e8684aff7263232c5e0fc7867b4f20a549010fe4efd05859871"} Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.458238 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.464258 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f4bd4b69-ntk8n" event={"ID":"43d36aef-fb14-4701-8931-9aaa96d049a9","Type":"ContainerDied","Data":"c0307f0807d895bc4c4c81ee028a2f34849a32fd2400b791f772ab65d779a108"} Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.464383 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.481769 4751 scope.go:117] "RemoveContainer" containerID="f4a9281bbfdd290c0c4cad13b45f8bae7e6a12cff0d866a3bc02118e3db003a9" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.501559 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.50153796 podStartE2EDuration="41.50153796s" podCreationTimestamp="2026-01-30 21:42:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:43:30.482047064 +0000 UTC m=+1749.227869713" watchObservedRunningTime="2026-01-30 21:43:30.50153796 +0000 UTC m=+1749.247360619" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.534338 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-d6c877d68-9ktwv"] Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.553475 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-d6c877d68-9ktwv"] Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.555578 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=48.555558409 podStartE2EDuration="48.555558409s" podCreationTimestamp="2026-01-30 21:42:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:43:30.536743481 +0000 UTC m=+1749.282566140" watchObservedRunningTime="2026-01-30 21:43:30.555558409 +0000 UTC m=+1749.301381058" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.579304 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6f4bd4b69-ntk8n"] Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.590800 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6f4bd4b69-ntk8n"] Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.024762 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43d36aef-fb14-4701-8931-9aaa96d049a9" path="/var/lib/kubelet/pods/43d36aef-fb14-4701-8931-9aaa96d049a9/volumes" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.026774 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a808a38-f939-4b4f-8386-e177712737d6" path="/var/lib/kubelet/pods/8a808a38-f939-4b4f-8386-e177712737d6/volumes" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.560952 4751 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w"] Jan 30 21:43:32 crc kubenswrapper[4751]: E0130 21:43:32.561999 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ea54f1-23d8-4e09-b159-bd66a7bb5618" containerName="init" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.562020 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ea54f1-23d8-4e09-b159-bd66a7bb5618" containerName="init" Jan 30 21:43:32 crc kubenswrapper[4751]: E0130 21:43:32.562030 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43d36aef-fb14-4701-8931-9aaa96d049a9" containerName="heat-api" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.562036 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="43d36aef-fb14-4701-8931-9aaa96d049a9" containerName="heat-api" Jan 30 21:43:32 crc kubenswrapper[4751]: E0130 21:43:32.562082 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ea54f1-23d8-4e09-b159-bd66a7bb5618" containerName="dnsmasq-dns" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.562090 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ea54f1-23d8-4e09-b159-bd66a7bb5618" containerName="dnsmasq-dns" Jan 30 21:43:32 crc kubenswrapper[4751]: E0130 21:43:32.562103 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a808a38-f939-4b4f-8386-e177712737d6" containerName="heat-cfnapi" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.562109 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a808a38-f939-4b4f-8386-e177712737d6" containerName="heat-cfnapi" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.562308 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a808a38-f939-4b4f-8386-e177712737d6" containerName="heat-cfnapi" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.562356 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ea54f1-23d8-4e09-b159-bd66a7bb5618" containerName="dnsmasq-dns" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.562379 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="43d36aef-fb14-4701-8931-9aaa96d049a9" containerName="heat-api" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.563170 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.565155 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.565582 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.568653 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.569381 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.584147 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w"] Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.716868 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.716995 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n6xs\" (UniqueName: \"kubernetes.io/projected/37b91419-687f-4907-888d-9344d1e8602a-kube-api-access-5n6xs\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.717109 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.717245 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.819237 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.819389 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.819497 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n6xs\" (UniqueName: \"kubernetes.io/projected/37b91419-687f-4907-888d-9344d1e8602a-kube-api-access-5n6xs\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.819636 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.826078 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.828279 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.835073 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.837060 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n6xs\" (UniqueName: \"kubernetes.io/projected/37b91419-687f-4907-888d-9344d1e8602a-kube-api-access-5n6xs\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.920517 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:33 crc kubenswrapper[4751]: I0130 21:43:33.451082 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:33 crc kubenswrapper[4751]: I0130 21:43:33.564017 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-d9fcd4c7f-gcp2z"] Jan 30 21:43:33 crc kubenswrapper[4751]: I0130 21:43:33.564250 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" podUID="191c5874-d3f0-4a2b-adcf-8ceed228e459" containerName="heat-engine" containerID="cri-o://11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba" gracePeriod=60 Jan 30 21:43:33 crc kubenswrapper[4751]: I0130 21:43:33.886785 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w"] Jan 30 21:43:34 crc kubenswrapper[4751]: I0130 21:43:34.513846 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" event={"ID":"37b91419-687f-4907-888d-9344d1e8602a","Type":"ContainerStarted","Data":"a3e1ec6aefe3e881897f0787f3cc0457ad03a27579caa9bb077b0717aa35bb28"} Jan 30 21:43:35 crc kubenswrapper[4751]: E0130 21:43:35.528524 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 21:43:35 crc kubenswrapper[4751]: E0130 21:43:35.530118 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 21:43:35 crc kubenswrapper[4751]: E0130 21:43:35.531476 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 21:43:35 crc kubenswrapper[4751]: E0130 21:43:35.531516 4751 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" podUID="191c5874-d3f0-4a2b-adcf-8ceed228e459" containerName="heat-engine" Jan 30 21:43:38 crc kubenswrapper[4751]: I0130 21:43:38.039577 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:43:38 crc kubenswrapper[4751]: E0130 21:43:38.041140 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 
21:43:40 crc kubenswrapper[4751]: I0130 21:43:40.114018 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="aa019efa-4067-4bd5-b370-12f6a4e6b856" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.22:5671: connect: connection refused" Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.250733 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.362506 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.796316 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-dmqw2"] Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.815102 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-dmqw2"] Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.873201 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-q9ws6"] Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.876959 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.878891 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.903365 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-q9ws6"] Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.907497 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-combined-ca-bundle\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.907821 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45dc7\" (UniqueName: \"kubernetes.io/projected/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-kube-api-access-45dc7\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.907959 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-scripts\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.908282 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-config-data\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.991939 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da95a3dd-69cf-4a27-af6c-1ac5b262c00a" path="/var/lib/kubelet/pods/da95a3dd-69cf-4a27-af6c-1ac5b262c00a/volumes" Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.010133 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-config-data\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.010261 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-combined-ca-bundle\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.010433 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45dc7\" (UniqueName: \"kubernetes.io/projected/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-kube-api-access-45dc7\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.010493 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-scripts\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.018253 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-scripts\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.018907 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-combined-ca-bundle\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.021294 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-config-data\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.069197 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45dc7\" (UniqueName: \"kubernetes.io/projected/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-kube-api-access-45dc7\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.202536 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.675453 4751 generic.go:334] "Generic (PLEG): container finished" podID="191c5874-d3f0-4a2b-adcf-8ceed228e459" containerID="11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba" exitCode=0 Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.675538 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" event={"ID":"191c5874-d3f0-4a2b-adcf-8ceed228e459","Type":"ContainerDied","Data":"11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba"} Jan 30 21:43:45 crc kubenswrapper[4751]: E0130 21:43:45.527884 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba is running failed: container process not found" containerID="11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 21:43:45 crc kubenswrapper[4751]: E0130 21:43:45.528231 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba is running failed: container process not found" containerID="11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 21:43:45 crc kubenswrapper[4751]: E0130 21:43:45.528634 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba is running failed: container process not found" containerID="11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 21:43:45 crc kubenswrapper[4751]: E0130 21:43:45.528686 4751 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" podUID="191c5874-d3f0-4a2b-adcf-8ceed228e459" containerName="heat-engine" Jan 30 21:43:46 crc kubenswrapper[4751]: I0130 21:43:46.970294 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.003741 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m6kn\" (UniqueName: \"kubernetes.io/projected/191c5874-d3f0-4a2b-adcf-8ceed228e459-kube-api-access-5m6kn\") pod \"191c5874-d3f0-4a2b-adcf-8ceed228e459\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.003913 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data\") pod \"191c5874-d3f0-4a2b-adcf-8ceed228e459\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.003957 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data-custom\") pod \"191c5874-d3f0-4a2b-adcf-8ceed228e459\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.004018 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-combined-ca-bundle\") pod \"191c5874-d3f0-4a2b-adcf-8ceed228e459\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.059749 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/191c5874-d3f0-4a2b-adcf-8ceed228e459-kube-api-access-5m6kn" (OuterVolumeSpecName: "kube-api-access-5m6kn") pod "191c5874-d3f0-4a2b-adcf-8ceed228e459" (UID: "191c5874-d3f0-4a2b-adcf-8ceed228e459"). InnerVolumeSpecName "kube-api-access-5m6kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.061233 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "191c5874-d3f0-4a2b-adcf-8ceed228e459" (UID: "191c5874-d3f0-4a2b-adcf-8ceed228e459"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.069037 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "191c5874-d3f0-4a2b-adcf-8ceed228e459" (UID: "191c5874-d3f0-4a2b-adcf-8ceed228e459"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.107627 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m6kn\" (UniqueName: \"kubernetes.io/projected/191c5874-d3f0-4a2b-adcf-8ceed228e459-kube-api-access-5m6kn\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.107665 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.107676 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.177499 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data" (OuterVolumeSpecName: "config-data") pod "191c5874-d3f0-4a2b-adcf-8ceed228e459" (UID: "191c5874-d3f0-4a2b-adcf-8ceed228e459"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.209713 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:47 crc kubenswrapper[4751]: W0130 21:43:47.229983 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee22e47a_e31f_4d01_8eec_e4d24dbb02ca.slice/crio-4e16bcad8b0611f5c448e56540c19d9dd39736dcd1a42341bea3573a92a46e77 WatchSource:0}: Error finding container 4e16bcad8b0611f5c448e56540c19d9dd39736dcd1a42341bea3573a92a46e77: Status 404 returned error can't find the container with id 4e16bcad8b0611f5c448e56540c19d9dd39736dcd1a42341bea3573a92a46e77 Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.232811 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-q9ws6"] Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.723588 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" event={"ID":"191c5874-d3f0-4a2b-adcf-8ceed228e459","Type":"ContainerDied","Data":"1364dfb35f78bd1c1c6c4e97299ac2e166c205513eddfbb1858b9264a7b65646"} Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.723999 4751 scope.go:117] "RemoveContainer" containerID="11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.724150 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.741343 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" event={"ID":"37b91419-687f-4907-888d-9344d1e8602a","Type":"ContainerStarted","Data":"a3b766c6a4c7e111f675065738bc5eed3b92e29a07eaf7151c0c434f41fa2116"} Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.752646 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-q9ws6" event={"ID":"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca","Type":"ContainerStarted","Data":"4e16bcad8b0611f5c448e56540c19d9dd39736dcd1a42341bea3573a92a46e77"} Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.771108 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" podStartSLOduration=3.123412981 podStartE2EDuration="15.771087015s" podCreationTimestamp="2026-01-30 21:43:32 +0000 UTC" firstStartedPulling="2026-01-30 21:43:33.878005827 +0000 UTC m=+1752.623828476" lastFinishedPulling="2026-01-30 21:43:46.525679861 +0000 UTC m=+1765.271502510" observedRunningTime="2026-01-30 21:43:47.759249255 +0000 UTC m=+1766.505071904" watchObservedRunningTime="2026-01-30 21:43:47.771087015 +0000 UTC m=+1766.516909664" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.810946 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-d9fcd4c7f-gcp2z"] Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.832819 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-d9fcd4c7f-gcp2z"] Jan 30 21:43:48 crc kubenswrapper[4751]: I0130 21:43:48.032169 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="191c5874-d3f0-4a2b-adcf-8ceed228e459" path="/var/lib/kubelet/pods/191c5874-d3f0-4a2b-adcf-8ceed228e459/volumes" Jan 30 21:43:48 crc kubenswrapper[4751]: I0130 21:43:48.180289 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" containerName="rabbitmq" containerID="cri-o://c118273bc1b7e17b96ef2802a30e188177f69c364926f8d0532e695e28d4ca05" gracePeriod=604796 Jan 30 21:43:50 crc kubenswrapper[4751]: I0130 21:43:50.113570 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:43:50 crc kubenswrapper[4751]: I0130 21:43:50.404577 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.136:5671: connect: connection refused" Jan 30 21:43:52 crc kubenswrapper[4751]: I0130 21:43:52.976000 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:43:52 crc kubenswrapper[4751]: E0130 21:43:52.976996 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:43:54 crc kubenswrapper[4751]: I0130 21:43:54.856123 4751 generic.go:334] "Generic (PLEG): 
container finished" podID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" containerID="c118273bc1b7e17b96ef2802a30e188177f69c364926f8d0532e695e28d4ca05" exitCode=0 Jan 30 21:43:54 crc kubenswrapper[4751]: I0130 21:43:54.856190 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"2ed6288f-1f28-4189-a452-10ed3fa78c7f","Type":"ContainerDied","Data":"c118273bc1b7e17b96ef2802a30e188177f69c364926f8d0532e695e28d4ca05"} Jan 30 21:43:54 crc kubenswrapper[4751]: I0130 21:43:54.860189 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-q9ws6" event={"ID":"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca","Type":"ContainerStarted","Data":"e68cf53ba13bd45baafd16d7ceca811457154cd522453b22e57f6a2054d3b023"} Jan 30 21:43:54 crc kubenswrapper[4751]: I0130 21:43:54.884042 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-q9ws6" podStartSLOduration=5.295973705 podStartE2EDuration="11.883989085s" podCreationTimestamp="2026-01-30 21:43:43 +0000 UTC" firstStartedPulling="2026-01-30 21:43:47.232938336 +0000 UTC m=+1765.978760985" lastFinishedPulling="2026-01-30 21:43:53.820953716 +0000 UTC m=+1772.566776365" observedRunningTime="2026-01-30 21:43:54.875202118 +0000 UTC m=+1773.621024767" watchObservedRunningTime="2026-01-30 21:43:54.883989085 +0000 UTC m=+1773.629811734" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.298838 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.415607 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-server-conf\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.415676 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-plugins-conf\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.415769 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-config-data\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.416416 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.416461 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-confd\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.416504 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-tls\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.416529 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.416545 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-plugins\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.416624 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ed6288f-1f28-4189-a452-10ed3fa78c7f-erlang-cookie-secret\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.416678 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ed6288f-1f28-4189-a452-10ed3fa78c7f-pod-info\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.416758 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqvcx\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-kube-api-access-zqvcx\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.416846 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-erlang-cookie\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.416880 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.417803 4751 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.417822 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.418159 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.432687 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.437471 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2ed6288f-1f28-4189-a452-10ed3fa78c7f-pod-info" (OuterVolumeSpecName: "pod-info") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.440612 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed6288f-1f28-4189-a452-10ed3fa78c7f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.454281 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-kube-api-access-zqvcx" (OuterVolumeSpecName: "kube-api-access-zqvcx") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "kube-api-access-zqvcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.477629 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-config-data" (OuterVolumeSpecName: "config-data") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.478294 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77" (OuterVolumeSpecName: "persistence") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "pvc-846ec118-ed9e-4829-80fb-53a6edccba77". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.520244 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.520317 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-846ec118-ed9e-4829-80fb-53a6edccba77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") on node \"crc\" " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.520452 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.520470 4751 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ed6288f-1f28-4189-a452-10ed3fa78c7f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.520483 4751 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ed6288f-1f28-4189-a452-10ed3fa78c7f-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.520496 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqvcx\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-kube-api-access-zqvcx\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.520511 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.538809 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-server-conf" (OuterVolumeSpecName: "server-conf") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.559404 4751 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.559549 4751 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-846ec118-ed9e-4829-80fb-53a6edccba77" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77") on node "crc" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.622733 4751 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.622957 4751 reconciler_common.go:293] "Volume detached for volume \"pvc-846ec118-ed9e-4829-80fb-53a6edccba77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.631516 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.725020 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.872905 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"2ed6288f-1f28-4189-a452-10ed3fa78c7f","Type":"ContainerDied","Data":"14b244ff165ab8225e2f7204427c69fbfcfd61b1331f0eb3d778a03cddbe88d2"} Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.872977 4751 scope.go:117] "RemoveContainer" containerID="c118273bc1b7e17b96ef2802a30e188177f69c364926f8d0532e695e28d4ca05" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.872936 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.906921 4751 scope.go:117] "RemoveContainer" containerID="dc43aef27eee6e5555871ea3e140a0c234f05afe3ded956404826b8a2999ed23" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.933173 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.960234 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.975024 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 30 21:43:55 crc kubenswrapper[4751]: E0130 21:43:55.975589 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="191c5874-d3f0-4a2b-adcf-8ceed228e459" containerName="heat-engine" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.975602 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="191c5874-d3f0-4a2b-adcf-8ceed228e459" containerName="heat-engine" Jan 30 21:43:55 crc kubenswrapper[4751]: E0130 21:43:55.975619 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" containerName="setup-container" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.975625 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" containerName="setup-container" Jan 30 21:43:55 crc kubenswrapper[4751]: E0130 21:43:55.975641 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" containerName="rabbitmq" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.975647 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" containerName="rabbitmq" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.975881 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" containerName="rabbitmq" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.975896 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="191c5874-d3f0-4a2b-adcf-8ceed228e459" containerName="heat-engine" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.977291 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.025197 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" path="/var/lib/kubelet/pods/2ed6288f-1f28-4189-a452-10ed3fa78c7f/volumes" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.027385 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030571 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75kgx\" (UniqueName: \"kubernetes.io/projected/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-kube-api-access-75kgx\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030642 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030682 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030714 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-server-conf\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030756 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030794 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-pod-info\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030819 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030835 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-config-data\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" 
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030854 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-846ec118-ed9e-4829-80fb-53a6edccba77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030897 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030918 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.133685 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-pod-info\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.133729 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.133758 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-config-data\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.133781 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-846ec118-ed9e-4829-80fb-53a6edccba77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.133851 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.133883 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.133995 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-75kgx\" (UniqueName: \"kubernetes.io/projected/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-kube-api-access-75kgx\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.134060 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.134121 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.134144 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-server-conf\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.134205 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.134690 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.135768 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.136128 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-server-conf\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.136238 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.137460 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-config-data\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " 
pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.139088 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.139491 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.139516 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-846ec118-ed9e-4829-80fb-53a6edccba77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2001e391e04ee7d0edfbd20e4205f1b60c57288335d512357b8e0f2ce2f191a2/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.139696 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.141472 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-pod-info\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.141886 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.154586 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75kgx\" (UniqueName: \"kubernetes.io/projected/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-kube-api-access-75kgx\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.223235 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-846ec118-ed9e-4829-80fb-53a6edccba77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.329535 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.931467 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 30 21:43:57 crc kubenswrapper[4751]: I0130 21:43:57.912024 4751 generic.go:334] "Generic (PLEG): container finished" podID="ee22e47a-e31f-4d01-8eec-e4d24dbb02ca" containerID="e68cf53ba13bd45baafd16d7ceca811457154cd522453b22e57f6a2054d3b023" exitCode=0 Jan 30 21:43:57 crc kubenswrapper[4751]: I0130 21:43:57.912448 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-q9ws6" event={"ID":"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca","Type":"ContainerDied","Data":"e68cf53ba13bd45baafd16d7ceca811457154cd522453b22e57f6a2054d3b023"} Jan 30 21:43:57 crc kubenswrapper[4751]: I0130 21:43:57.915180 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"279dd57b-8f7d-4730-a9ee-cf124f8c0d52","Type":"ContainerStarted","Data":"f0e17415c02fe28ff4dbf786bebc3e608f6469c1274a4f23432aca898f8b98b7"} Jan 30 21:43:58 crc kubenswrapper[4751]: I0130 21:43:58.926555 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"279dd57b-8f7d-4730-a9ee-cf124f8c0d52","Type":"ContainerStarted","Data":"0e7b5befd33a8603a2fbcf0bd4a03072a19d26c3f4e7aad7020c6d3a05574310"} Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.323335 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.447239 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-scripts\") pod \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.447636 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-config-data\") pod \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.447759 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-combined-ca-bundle\") pod \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.447850 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45dc7\" (UniqueName: \"kubernetes.io/projected/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-kube-api-access-45dc7\") pod \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.454290 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-scripts" (OuterVolumeSpecName: "scripts") pod "ee22e47a-e31f-4d01-8eec-e4d24dbb02ca" (UID: "ee22e47a-e31f-4d01-8eec-e4d24dbb02ca"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.455781 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-kube-api-access-45dc7" (OuterVolumeSpecName: "kube-api-access-45dc7") pod "ee22e47a-e31f-4d01-8eec-e4d24dbb02ca" (UID: "ee22e47a-e31f-4d01-8eec-e4d24dbb02ca"). InnerVolumeSpecName "kube-api-access-45dc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.496525 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-config-data" (OuterVolumeSpecName: "config-data") pod "ee22e47a-e31f-4d01-8eec-e4d24dbb02ca" (UID: "ee22e47a-e31f-4d01-8eec-e4d24dbb02ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.505654 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee22e47a-e31f-4d01-8eec-e4d24dbb02ca" (UID: "ee22e47a-e31f-4d01-8eec-e4d24dbb02ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.551886 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45dc7\" (UniqueName: \"kubernetes.io/projected/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-kube-api-access-45dc7\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.551945 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.551958 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.551969 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.941592 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.941593 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-q9ws6" event={"ID":"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca","Type":"ContainerDied","Data":"4e16bcad8b0611f5c448e56540c19d9dd39736dcd1a42341bea3573a92a46e77"} Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.943447 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e16bcad8b0611f5c448e56540c19d9dd39736dcd1a42341bea3573a92a46e77" Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.943706 4751 generic.go:334] "Generic (PLEG): container finished" podID="37b91419-687f-4907-888d-9344d1e8602a" containerID="a3b766c6a4c7e111f675065738bc5eed3b92e29a07eaf7151c0c434f41fa2116" exitCode=0 Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.943759 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" event={"ID":"37b91419-687f-4907-888d-9344d1e8602a","Type":"ContainerDied","Data":"a3b766c6a4c7e111f675065738bc5eed3b92e29a07eaf7151c0c434f41fa2116"} Jan 30 21:44:00 crc kubenswrapper[4751]: I0130 21:44:00.420614 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 30 21:44:00 crc kubenswrapper[4751]: I0130 21:44:00.421229 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-api" containerID="cri-o://c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9" gracePeriod=30 Jan 30 21:44:00 crc kubenswrapper[4751]: I0130 21:44:00.421300 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-notifier" containerID="cri-o://da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab" gracePeriod=30 Jan 30 21:44:00 crc kubenswrapper[4751]: I0130 21:44:00.421439 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-evaluator" containerID="cri-o://a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd" gracePeriod=30 Jan 30 21:44:00 crc kubenswrapper[4751]: I0130 21:44:00.421779 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-listener" containerID="cri-o://41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187" gracePeriod=30 Jan 30 21:44:00 crc kubenswrapper[4751]: I0130 21:44:00.956489 4751 generic.go:334] "Generic (PLEG): container finished" podID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerID="c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9" exitCode=0 Jan 30 21:44:00 crc kubenswrapper[4751]: I0130 21:44:00.956569 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c6b6e10-77a2-49e7-a4eb-25af482bfab8","Type":"ContainerDied","Data":"c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9"} Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.513442 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.598560 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-inventory\") pod \"37b91419-687f-4907-888d-9344d1e8602a\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.598761 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n6xs\" (UniqueName: \"kubernetes.io/projected/37b91419-687f-4907-888d-9344d1e8602a-kube-api-access-5n6xs\") pod \"37b91419-687f-4907-888d-9344d1e8602a\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.598899 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-ssh-key-openstack-edpm-ipam\") pod \"37b91419-687f-4907-888d-9344d1e8602a\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.598968 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-repo-setup-combined-ca-bundle\") pod \"37b91419-687f-4907-888d-9344d1e8602a\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.603801 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "37b91419-687f-4907-888d-9344d1e8602a" (UID: "37b91419-687f-4907-888d-9344d1e8602a"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.618916 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b91419-687f-4907-888d-9344d1e8602a-kube-api-access-5n6xs" (OuterVolumeSpecName: "kube-api-access-5n6xs") pod "37b91419-687f-4907-888d-9344d1e8602a" (UID: "37b91419-687f-4907-888d-9344d1e8602a"). InnerVolumeSpecName "kube-api-access-5n6xs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.635122 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "37b91419-687f-4907-888d-9344d1e8602a" (UID: "37b91419-687f-4907-888d-9344d1e8602a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.638167 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-inventory" (OuterVolumeSpecName: "inventory") pod "37b91419-687f-4907-888d-9344d1e8602a" (UID: "37b91419-687f-4907-888d-9344d1e8602a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.701730 4751 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.701770 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.701780 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n6xs\" (UniqueName: \"kubernetes.io/projected/37b91419-687f-4907-888d-9344d1e8602a-kube-api-access-5n6xs\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.701790 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.971062 4751 generic.go:334] "Generic (PLEG): container finished" podID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerID="a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd" exitCode=0 Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.971141 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c6b6e10-77a2-49e7-a4eb-25af482bfab8","Type":"ContainerDied","Data":"a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd"} Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.973116 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" event={"ID":"37b91419-687f-4907-888d-9344d1e8602a","Type":"ContainerDied","Data":"a3e1ec6aefe3e881897f0787f3cc0457ad03a27579caa9bb077b0717aa35bb28"} Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.973155 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3e1ec6aefe3e881897f0787f3cc0457ad03a27579caa9bb077b0717aa35bb28" Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.973134 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.048778 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp"] Jan 30 21:44:02 crc kubenswrapper[4751]: E0130 21:44:02.049516 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b91419-687f-4907-888d-9344d1e8602a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.049539 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b91419-687f-4907-888d-9344d1e8602a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 21:44:02 crc kubenswrapper[4751]: E0130 21:44:02.049602 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee22e47a-e31f-4d01-8eec-e4d24dbb02ca" containerName="aodh-db-sync" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.049611 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee22e47a-e31f-4d01-8eec-e4d24dbb02ca" containerName="aodh-db-sync" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.049883 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b91419-687f-4907-888d-9344d1e8602a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.049920 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee22e47a-e31f-4d01-8eec-e4d24dbb02ca" containerName="aodh-db-sync" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.050972 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.053543 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.053769 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.054052 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.055181 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.059836 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp"] Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.213823 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2bbbp\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.214244 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2bbbp\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.214548 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vhpn\" (UniqueName: \"kubernetes.io/projected/a4b9ecbd-4cf2-4554-b209-d7a421499f08-kube-api-access-2vhpn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2bbbp\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.317351 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2bbbp\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.317690 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2bbbp\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.317895 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vhpn\" (UniqueName: \"kubernetes.io/projected/a4b9ecbd-4cf2-4554-b209-d7a421499f08-kube-api-access-2vhpn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2bbbp\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.321007 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2bbbp\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.321980 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2bbbp\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.341621 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vhpn\" (UniqueName: \"kubernetes.io/projected/a4b9ecbd-4cf2-4554-b209-d7a421499f08-kube-api-access-2vhpn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2bbbp\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.377758 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.967112 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp"] Jan 30 21:44:03 crc kubenswrapper[4751]: I0130 21:44:02.999475 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" event={"ID":"a4b9ecbd-4cf2-4554-b209-d7a421499f08","Type":"ContainerStarted","Data":"5328f5a63580c9f7ec213c84186e982dd1d8995e50601662fa82a7d7034722f7"} Jan 30 21:44:04 crc kubenswrapper[4751]: I0130 21:44:04.018173 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" event={"ID":"a4b9ecbd-4cf2-4554-b209-d7a421499f08","Type":"ContainerStarted","Data":"3fb3743686fabe731a3892d66238c8c6b9475df17820dd33a4c69d432514da95"} Jan 30 21:44:04 crc kubenswrapper[4751]: I0130 21:44:04.076017 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" podStartSLOduration=1.669303446 podStartE2EDuration="2.075997556s" podCreationTimestamp="2026-01-30 21:44:02 +0000 UTC" firstStartedPulling="2026-01-30 21:44:02.961039415 +0000 UTC m=+1781.706862074" lastFinishedPulling="2026-01-30 21:44:03.367733535 +0000 UTC m=+1782.113556184" observedRunningTime="2026-01-30 21:44:04.059426369 +0000 UTC m=+1782.805249008" watchObservedRunningTime="2026-01-30 21:44:04.075997556 +0000 UTC m=+1782.821820205" Jan 30 21:44:06 crc kubenswrapper[4751]: I0130 21:44:06.051021 4751 generic.go:334] "Generic (PLEG): container finished" podID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerID="da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab" exitCode=0 Jan 30 21:44:06 crc kubenswrapper[4751]: I0130 21:44:06.051084 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c6b6e10-77a2-49e7-a4eb-25af482bfab8","Type":"ContainerDied","Data":"da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab"} Jan 30 21:44:07 crc kubenswrapper[4751]: I0130 21:44:07.073026 4751 generic.go:334] "Generic (PLEG): container finished" podID="a4b9ecbd-4cf2-4554-b209-d7a421499f08" containerID="3fb3743686fabe731a3892d66238c8c6b9475df17820dd33a4c69d432514da95" exitCode=0 Jan 30 21:44:07 crc kubenswrapper[4751]: I0130 21:44:07.073278 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" event={"ID":"a4b9ecbd-4cf2-4554-b209-d7a421499f08","Type":"ContainerDied","Data":"3fb3743686fabe731a3892d66238c8c6b9475df17820dd33a4c69d432514da95"} Jan 30 21:44:07 crc kubenswrapper[4751]: I0130 21:44:07.976358 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:44:07 crc kubenswrapper[4751]: E0130 21:44:07.976732 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.686755 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.793079 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-inventory\") pod \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.793145 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-ssh-key-openstack-edpm-ipam\") pod \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.793240 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vhpn\" (UniqueName: \"kubernetes.io/projected/a4b9ecbd-4cf2-4554-b209-d7a421499f08-kube-api-access-2vhpn\") pod \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.819678 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4b9ecbd-4cf2-4554-b209-d7a421499f08-kube-api-access-2vhpn" (OuterVolumeSpecName: "kube-api-access-2vhpn") pod "a4b9ecbd-4cf2-4554-b209-d7a421499f08" (UID: "a4b9ecbd-4cf2-4554-b209-d7a421499f08"). InnerVolumeSpecName "kube-api-access-2vhpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.828117 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-inventory" (OuterVolumeSpecName: "inventory") pod "a4b9ecbd-4cf2-4554-b209-d7a421499f08" (UID: "a4b9ecbd-4cf2-4554-b209-d7a421499f08"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.839277 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a4b9ecbd-4cf2-4554-b209-d7a421499f08" (UID: "a4b9ecbd-4cf2-4554-b209-d7a421499f08"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.896575 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.896607 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.896619 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vhpn\" (UniqueName: \"kubernetes.io/projected/a4b9ecbd-4cf2-4554-b209-d7a421499f08-kube-api-access-2vhpn\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.921031 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.999256 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-public-tls-certs\") pod \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.999308 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-internal-tls-certs\") pod \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.999388 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cgnk\" (UniqueName: \"kubernetes.io/projected/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-kube-api-access-6cgnk\") pod \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.999413 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-scripts\") pod \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:08.999708 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-combined-ca-bundle\") pod \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:08.999748 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-config-data\") pod \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.005561 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-kube-api-access-6cgnk" (OuterVolumeSpecName: "kube-api-access-6cgnk") pod "2c6b6e10-77a2-49e7-a4eb-25af482bfab8" (UID: "2c6b6e10-77a2-49e7-a4eb-25af482bfab8"). InnerVolumeSpecName "kube-api-access-6cgnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.005835 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-scripts" (OuterVolumeSpecName: "scripts") pod "2c6b6e10-77a2-49e7-a4eb-25af482bfab8" (UID: "2c6b6e10-77a2-49e7-a4eb-25af482bfab8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.077699 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2c6b6e10-77a2-49e7-a4eb-25af482bfab8" (UID: "2c6b6e10-77a2-49e7-a4eb-25af482bfab8"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.094628 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2c6b6e10-77a2-49e7-a4eb-25af482bfab8" (UID: "2c6b6e10-77a2-49e7-a4eb-25af482bfab8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.100483 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" event={"ID":"a4b9ecbd-4cf2-4554-b209-d7a421499f08","Type":"ContainerDied","Data":"5328f5a63580c9f7ec213c84186e982dd1d8995e50601662fa82a7d7034722f7"} Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.100519 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5328f5a63580c9f7ec213c84186e982dd1d8995e50601662fa82a7d7034722f7" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.100575 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.108963 4751 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.109005 4751 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.109020 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cgnk\" (UniqueName: \"kubernetes.io/projected/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-kube-api-access-6cgnk\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.109036 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.119048 4751 generic.go:334] "Generic (PLEG): container finished" podID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerID="41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187" exitCode=0 Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.119094 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c6b6e10-77a2-49e7-a4eb-25af482bfab8","Type":"ContainerDied","Data":"41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187"} Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.119120 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c6b6e10-77a2-49e7-a4eb-25af482bfab8","Type":"ContainerDied","Data":"7283a35e10e58cb6fd870643eadfa452a5d15d1f89aa7955246ef678a98a324c"} Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.119136 4751 scope.go:117] "RemoveContainer" containerID="41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.119311 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.154374 4751 scope.go:117] "RemoveContainer" containerID="da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.154379 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c6b6e10-77a2-49e7-a4eb-25af482bfab8" (UID: "2c6b6e10-77a2-49e7-a4eb-25af482bfab8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.156526 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-config-data" (OuterVolumeSpecName: "config-data") pod "2c6b6e10-77a2-49e7-a4eb-25af482bfab8" (UID: "2c6b6e10-77a2-49e7-a4eb-25af482bfab8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.183378 4751 scope.go:117] "RemoveContainer" containerID="a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199069 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl"] Jan 30 21:44:09 crc kubenswrapper[4751]: E0130 21:44:09.199592 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-api" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199610 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-api" Jan 30 21:44:09 crc kubenswrapper[4751]: E0130 21:44:09.199630 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-listener" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199636 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-listener" Jan 30 21:44:09 crc kubenswrapper[4751]: E0130 21:44:09.199676 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-notifier" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199685 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-notifier" Jan 30 21:44:09 crc kubenswrapper[4751]: E0130 21:44:09.199705 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b9ecbd-4cf2-4554-b209-d7a421499f08" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199713 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b9ecbd-4cf2-4554-b209-d7a421499f08" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 21:44:09 crc kubenswrapper[4751]: E0130 21:44:09.199725 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-evaluator" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199731 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-evaluator" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199930 4751 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-notifier" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199948 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-api" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199958 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-listener" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199968 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-evaluator" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199997 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4b9ecbd-4cf2-4554-b209-d7a421499f08" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.200944 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.209504 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl"] Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.210224 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.210480 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.210615 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.210981 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.211001 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.211250 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.216009 4751 scope.go:117] "RemoveContainer" containerID="c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.241518 4751 scope.go:117] "RemoveContainer" containerID="41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187" Jan 30 21:44:09 crc kubenswrapper[4751]: E0130 21:44:09.241894 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187\": container with ID starting with 41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187 not found: ID does not exist" containerID="41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.241930 4751 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187"} err="failed to get container status \"41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187\": rpc error: code = NotFound desc = could not find container \"41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187\": container with ID starting with 41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187 not found: ID does not exist" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.241955 4751 scope.go:117] "RemoveContainer" containerID="da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab" Jan 30 21:44:09 crc kubenswrapper[4751]: E0130 21:44:09.242318 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab\": container with ID starting with da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab not found: ID does not exist" containerID="da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.242354 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab"} err="failed to get container status \"da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab\": rpc error: code = NotFound desc = could not find container \"da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab\": container with ID starting with da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab not found: ID does not exist" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.242368 4751 scope.go:117] "RemoveContainer" containerID="a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd" Jan 30 21:44:09 crc kubenswrapper[4751]: E0130 21:44:09.242586 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd\": container with ID starting with a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd not found: ID does not exist" containerID="a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.242608 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd"} err="failed to get container status \"a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd\": rpc error: code = NotFound desc = could not find container \"a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd\": container with ID starting with a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd not found: ID does not exist" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.242621 4751 scope.go:117] "RemoveContainer" containerID="c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9" Jan 30 21:44:09 crc kubenswrapper[4751]: E0130 21:44:09.242883 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9\": container with ID starting with c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9 not found: ID does not exist" 
containerID="c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.242898 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9"} err="failed to get container status \"c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9\": rpc error: code = NotFound desc = could not find container \"c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9\": container with ID starting with c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9 not found: ID does not exist" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.313462 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6xjb\" (UniqueName: \"kubernetes.io/projected/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-kube-api-access-x6xjb\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.313526 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.313883 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.314072 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.416224 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6xjb\" (UniqueName: \"kubernetes.io/projected/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-kube-api-access-x6xjb\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.416343 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.416486 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.416579 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.419910 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.421275 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.422658 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.437498 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6xjb\" (UniqueName: \"kubernetes.io/projected/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-kube-api-access-x6xjb\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.464944 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.483294 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.496711 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.500423 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.505724 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-k9tjh" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.505956 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.506125 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.509818 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.510152 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.511842 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.526022 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.619970 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-config-data\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.620032 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-internal-tls-certs\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.620111 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-public-tls-certs\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.620202 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt5z8\" (UniqueName: \"kubernetes.io/projected/0c9eccf2-9252-4f35-9aff-56f0e15102a1-kube-api-access-qt5z8\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.620249 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-scripts\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.620611 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.724004 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.724449 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-config-data\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.724480 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-internal-tls-certs\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.724524 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-public-tls-certs\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.724576 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt5z8\" (UniqueName: \"kubernetes.io/projected/0c9eccf2-9252-4f35-9aff-56f0e15102a1-kube-api-access-qt5z8\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.724603 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-scripts\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.730886 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-scripts\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.731014 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.731621 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-config-data\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.732877 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-internal-tls-certs\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.745853 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt5z8\" (UniqueName: \"kubernetes.io/projected/0c9eccf2-9252-4f35-9aff-56f0e15102a1-kube-api-access-qt5z8\") pod \"aodh-0\" (UID: 
\"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.751518 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-public-tls-certs\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.969661 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.988418 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" path="/var/lib/kubelet/pods/2c6b6e10-77a2-49e7-a4eb-25af482bfab8/volumes" Jan 30 21:44:10 crc kubenswrapper[4751]: I0130 21:44:10.174099 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl"] Jan 30 21:44:10 crc kubenswrapper[4751]: W0130 21:44:10.563442 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c9eccf2_9252_4f35_9aff_56f0e15102a1.slice/crio-e4e3ec36fc504c99439e3712fb47d464abb15c2638d8103de754c0765bb55f02 WatchSource:0}: Error finding container e4e3ec36fc504c99439e3712fb47d464abb15c2638d8103de754c0765bb55f02: Status 404 returned error can't find the container with id e4e3ec36fc504c99439e3712fb47d464abb15c2638d8103de754c0765bb55f02 Jan 30 21:44:10 crc kubenswrapper[4751]: I0130 21:44:10.571591 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 30 21:44:11 crc kubenswrapper[4751]: I0130 21:44:11.185932 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" event={"ID":"25d1f8e8-75ed-46ae-b674-87f34c4edbfa","Type":"ContainerStarted","Data":"5fb425b25c8902fe60e5dcd58df1f879542305f303c7a43c344cbd78332f0ba4"} Jan 30 21:44:11 crc kubenswrapper[4751]: I0130 21:44:11.186233 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" event={"ID":"25d1f8e8-75ed-46ae-b674-87f34c4edbfa","Type":"ContainerStarted","Data":"51f8374b96e74508b8ea161ccefb7ec2d95c6112bacdf4605ef5155ad9ff2a2e"} Jan 30 21:44:11 crc kubenswrapper[4751]: I0130 21:44:11.189255 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0c9eccf2-9252-4f35-9aff-56f0e15102a1","Type":"ContainerStarted","Data":"428802cdcdc1078d2fc15c696d718c7a1621a6f2124b81b408d19f3109500e67"} Jan 30 21:44:11 crc kubenswrapper[4751]: I0130 21:44:11.189282 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0c9eccf2-9252-4f35-9aff-56f0e15102a1","Type":"ContainerStarted","Data":"e4e3ec36fc504c99439e3712fb47d464abb15c2638d8103de754c0765bb55f02"} Jan 30 21:44:11 crc kubenswrapper[4751]: I0130 21:44:11.240122 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" podStartSLOduration=1.8336995680000001 podStartE2EDuration="2.239966546s" podCreationTimestamp="2026-01-30 21:44:09 +0000 UTC" firstStartedPulling="2026-01-30 21:44:10.171168661 +0000 UTC m=+1788.916991310" lastFinishedPulling="2026-01-30 21:44:10.577435639 +0000 UTC m=+1789.323258288" observedRunningTime="2026-01-30 21:44:11.228866167 +0000 UTC m=+1789.974688816" 
watchObservedRunningTime="2026-01-30 21:44:11.239966546 +0000 UTC m=+1789.985789185" Jan 30 21:44:13 crc kubenswrapper[4751]: I0130 21:44:13.222651 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0c9eccf2-9252-4f35-9aff-56f0e15102a1","Type":"ContainerStarted","Data":"d8eaf98b1b6bb0b06d3301fd07ef309c4552c2d25f0cebdc4f98ccc76a38ceb1"} Jan 30 21:44:14 crc kubenswrapper[4751]: I0130 21:44:14.235253 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0c9eccf2-9252-4f35-9aff-56f0e15102a1","Type":"ContainerStarted","Data":"72b8e80ad3b6064951d95698af079dbf14fe74e8b3f9e58ebb2db863341fa90a"} Jan 30 21:44:15 crc kubenswrapper[4751]: I0130 21:44:15.249127 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0c9eccf2-9252-4f35-9aff-56f0e15102a1","Type":"ContainerStarted","Data":"965553b39109cd9b09e2aa0be79059e6be3054cd57f0984afa3f7550e512cf34"} Jan 30 21:44:15 crc kubenswrapper[4751]: I0130 21:44:15.311591 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.508449134 podStartE2EDuration="6.311572159s" podCreationTimestamp="2026-01-30 21:44:09 +0000 UTC" firstStartedPulling="2026-01-30 21:44:10.577722577 +0000 UTC m=+1789.323545226" lastFinishedPulling="2026-01-30 21:44:14.380845602 +0000 UTC m=+1793.126668251" observedRunningTime="2026-01-30 21:44:15.288033754 +0000 UTC m=+1794.033856403" watchObservedRunningTime="2026-01-30 21:44:15.311572159 +0000 UTC m=+1794.057394808" Jan 30 21:44:19 crc kubenswrapper[4751]: I0130 21:44:19.976449 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:44:19 crc kubenswrapper[4751]: E0130 21:44:19.977877 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:44:23 crc kubenswrapper[4751]: I0130 21:44:23.615204 4751 scope.go:117] "RemoveContainer" containerID="7254994e57fe71a7702af83ba12ef3a837f896f1d6e6e6a7dbba9ca54cdfc1ad" Jan 30 21:44:31 crc kubenswrapper[4751]: I0130 21:44:31.435177 4751 generic.go:334] "Generic (PLEG): container finished" podID="279dd57b-8f7d-4730-a9ee-cf124f8c0d52" containerID="0e7b5befd33a8603a2fbcf0bd4a03072a19d26c3f4e7aad7020c6d3a05574310" exitCode=0 Jan 30 21:44:31 crc kubenswrapper[4751]: I0130 21:44:31.435288 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"279dd57b-8f7d-4730-a9ee-cf124f8c0d52","Type":"ContainerDied","Data":"0e7b5befd33a8603a2fbcf0bd4a03072a19d26c3f4e7aad7020c6d3a05574310"} Jan 30 21:44:32 crc kubenswrapper[4751]: I0130 21:44:32.448674 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"279dd57b-8f7d-4730-a9ee-cf124f8c0d52","Type":"ContainerStarted","Data":"e2065c8fac7270fc3baadedcaef0bc9456870e9d783937e9b9d6212f3ef535cc"} Jan 30 21:44:32 crc kubenswrapper[4751]: I0130 21:44:32.449463 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 30 21:44:32 crc kubenswrapper[4751]: I0130 21:44:32.479867 4751 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.47984932 podStartE2EDuration="37.47984932s" podCreationTimestamp="2026-01-30 21:43:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:44:32.471807663 +0000 UTC m=+1811.217630332" watchObservedRunningTime="2026-01-30 21:44:32.47984932 +0000 UTC m=+1811.225671969" Jan 30 21:44:34 crc kubenswrapper[4751]: I0130 21:44:34.976869 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:44:35 crc kubenswrapper[4751]: I0130 21:44:35.490754 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"e83ca35bd085af955b4b3e0476bcb9169304b85473995bcb3f76de779bdcffb0"} Jan 30 21:44:46 crc kubenswrapper[4751]: I0130 21:44:46.334600 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 30 21:44:46 crc kubenswrapper[4751]: I0130 21:44:46.413670 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 21:44:50 crc kubenswrapper[4751]: I0130 21:44:50.698651 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f18b5d57-5b05-4ef0-bae3-68938e094510" containerName="rabbitmq" containerID="cri-o://fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee" gracePeriod=604796 Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.381782 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.522043 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-erlang-cookie\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.522662 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-server-conf\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.522701 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-tls\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.522734 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-plugins-conf\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.522766 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f18b5d57-5b05-4ef0-bae3-68938e094510-pod-info\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: 
\"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.522823 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rt94\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-kube-api-access-8rt94\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.523077 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.523306 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.523406 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.523438 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-config-data\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.523526 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-confd\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.523560 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-plugins\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.523597 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f18b5d57-5b05-4ef0-bae3-68938e094510-erlang-cookie-secret\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.523919 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.524739 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.524756 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.524771 4751 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.531390 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-kube-api-access-8rt94" (OuterVolumeSpecName: "kube-api-access-8rt94") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "kube-api-access-8rt94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.531674 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.533289 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f18b5d57-5b05-4ef0-bae3-68938e094510-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.541637 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f18b5d57-5b05-4ef0-bae3-68938e094510-pod-info" (OuterVolumeSpecName: "pod-info") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.550007 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef" (OuterVolumeSpecName: "persistence") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.558019 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-config-data" (OuterVolumeSpecName: "config-data") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.590202 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-server-conf" (OuterVolumeSpecName: "server-conf") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.629162 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.629204 4751 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f18b5d57-5b05-4ef0-bae3-68938e094510-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.629219 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rt94\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-kube-api-access-8rt94\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.629259 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") on node \"crc\" " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.629274 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.629289 4751 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f18b5d57-5b05-4ef0-bae3-68938e094510-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.629302 4751 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.687011 4751 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.687231 4751 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef") on node "crc" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.732676 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.733469 4751 reconciler_common.go:293] "Volume detached for volume \"pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.733499 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.772566 4751 generic.go:334] "Generic (PLEG): container finished" podID="f18b5d57-5b05-4ef0-bae3-68938e094510" containerID="fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee" exitCode=0 Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.772618 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f18b5d57-5b05-4ef0-bae3-68938e094510","Type":"ContainerDied","Data":"fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee"} Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.772649 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f18b5d57-5b05-4ef0-bae3-68938e094510","Type":"ContainerDied","Data":"a7dc563e23807f6efe79faed84ec9c2b00f86190217519d5f3838b56a30401b8"} Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.772678 4751 scope.go:117] "RemoveContainer" containerID="fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.772892 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.837401 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.853519 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.873493 4751 scope.go:117] "RemoveContainer" containerID="754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.924517 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 21:44:57 crc kubenswrapper[4751]: E0130 21:44:57.941005 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18b5d57-5b05-4ef0-bae3-68938e094510" containerName="setup-container" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.941064 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18b5d57-5b05-4ef0-bae3-68938e094510" containerName="setup-container" Jan 30 21:44:57 crc kubenswrapper[4751]: E0130 21:44:57.941103 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18b5d57-5b05-4ef0-bae3-68938e094510" containerName="rabbitmq" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.941112 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18b5d57-5b05-4ef0-bae3-68938e094510" containerName="rabbitmq" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.944616 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18b5d57-5b05-4ef0-bae3-68938e094510" containerName="rabbitmq" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.964517 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.003657 4751 scope.go:117] "RemoveContainer" containerID="fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee" Jan 30 21:44:58 crc kubenswrapper[4751]: E0130 21:44:58.006534 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee\": container with ID starting with fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee not found: ID does not exist" containerID="fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.006581 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee"} err="failed to get container status \"fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee\": rpc error: code = NotFound desc = could not find container \"fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee\": container with ID starting with fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee not found: ID does not exist" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.006607 4751 scope.go:117] "RemoveContainer" containerID="754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256" Jan 30 21:44:58 crc kubenswrapper[4751]: E0130 21:44:58.042515 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256\": container with ID starting with 754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256 not found: ID does not exist" containerID="754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.042617 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256"} err="failed to get container status \"754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256\": rpc error: code = NotFound desc = could not find container \"754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256\": container with ID starting with 754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256 not found: ID does not exist" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.046430 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f18b5d57-5b05-4ef0-bae3-68938e094510" path="/var/lib/kubelet/pods/f18b5d57-5b05-4ef0-bae3-68938e094510/volumes" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.047520 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.071501 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.071760 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.071826 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.071921 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.072063 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcchl\" (UniqueName: \"kubernetes.io/projected/4ab0c22c-f078-413c-ac94-9e543a02c3fb-kube-api-access-bcchl\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.072116 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ab0c22c-f078-413c-ac94-9e543a02c3fb-config-data\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.072159 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.072216 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ab0c22c-f078-413c-ac94-9e543a02c3fb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.072257 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ab0c22c-f078-413c-ac94-9e543a02c3fb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.072280 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ab0c22c-f078-413c-ac94-9e543a02c3fb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.072348 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/4ab0c22c-f078-413c-ac94-9e543a02c3fb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.174688 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175064 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175170 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcchl\" (UniqueName: \"kubernetes.io/projected/4ab0c22c-f078-413c-ac94-9e543a02c3fb-kube-api-access-bcchl\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175201 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ab0c22c-f078-413c-ac94-9e543a02c3fb-config-data\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175240 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175366 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ab0c22c-f078-413c-ac94-9e543a02c3fb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175411 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ab0c22c-f078-413c-ac94-9e543a02c3fb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175430 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ab0c22c-f078-413c-ac94-9e543a02c3fb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175488 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ab0c22c-f078-413c-ac94-9e543a02c3fb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 
21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175552 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175723 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175761 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.176153 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.177052 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ab0c22c-f078-413c-ac94-9e543a02c3fb-config-data\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.178200 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ab0c22c-f078-413c-ac94-9e543a02c3fb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.179801 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ab0c22c-f078-413c-ac94-9e543a02c3fb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.181906 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.184166 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ab0c22c-f078-413c-ac94-9e543a02c3fb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.184191 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ab0c22c-f078-413c-ac94-9e543a02c3fb-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.186361 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.186457 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d4befba5b3452f215320be9365c178860d706182c1f41ab25a94828e6255d8c2/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.192213 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.199096 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcchl\" (UniqueName: \"kubernetes.io/projected/4ab0c22c-f078-413c-ac94-9e543a02c3fb-kube-api-access-bcchl\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.268984 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.363371 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.919925 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 21:44:58 crc kubenswrapper[4751]: W0130 21:44:58.928559 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ab0c22c_f078_413c_ac94_9e543a02c3fb.slice/crio-600261f7b61ee0f41837d949f8922af6fa8967574640325202cd0079522e6918 WatchSource:0}: Error finding container 600261f7b61ee0f41837d949f8922af6fa8967574640325202cd0079522e6918: Status 404 returned error can't find the container with id 600261f7b61ee0f41837d949f8922af6fa8967574640325202cd0079522e6918 Jan 30 21:44:59 crc kubenswrapper[4751]: I0130 21:44:59.794518 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ab0c22c-f078-413c-ac94-9e543a02c3fb","Type":"ContainerStarted","Data":"600261f7b61ee0f41837d949f8922af6fa8967574640325202cd0079522e6918"} Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.195090 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"] Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.199182 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc" Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.209649 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.209840 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.217940 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"] Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.331297 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvzrp\" (UniqueName: \"kubernetes.io/projected/60a5fa77-b23e-417a-9854-929675be1c58-kube-api-access-vvzrp\") pod \"collect-profiles-29496825-qqpqc\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc" Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.331476 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60a5fa77-b23e-417a-9854-929675be1c58-secret-volume\") pod \"collect-profiles-29496825-qqpqc\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc" Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.331537 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60a5fa77-b23e-417a-9854-929675be1c58-config-volume\") pod \"collect-profiles-29496825-qqpqc\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc" Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.435854 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvzrp\" (UniqueName: \"kubernetes.io/projected/60a5fa77-b23e-417a-9854-929675be1c58-kube-api-access-vvzrp\") pod \"collect-profiles-29496825-qqpqc\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc" Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.435939 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60a5fa77-b23e-417a-9854-929675be1c58-secret-volume\") pod \"collect-profiles-29496825-qqpqc\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc" Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.435965 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60a5fa77-b23e-417a-9854-929675be1c58-config-volume\") pod \"collect-profiles-29496825-qqpqc\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc" Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.437050 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60a5fa77-b23e-417a-9854-929675be1c58-config-volume\") pod 
\"collect-profiles-29496825-qqpqc\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc" Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.480720 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60a5fa77-b23e-417a-9854-929675be1c58-secret-volume\") pod \"collect-profiles-29496825-qqpqc\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc" Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.481634 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvzrp\" (UniqueName: \"kubernetes.io/projected/60a5fa77-b23e-417a-9854-929675be1c58-kube-api-access-vvzrp\") pod \"collect-profiles-29496825-qqpqc\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc" Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.538362 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc" Jan 30 21:45:01 crc kubenswrapper[4751]: W0130 21:45:01.016259 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60a5fa77_b23e_417a_9854_929675be1c58.slice/crio-cdff7d7d0a81f7d5dee4f6ad01cc4d679c85e0453b551c0012ae37a4dc30d48e WatchSource:0}: Error finding container cdff7d7d0a81f7d5dee4f6ad01cc4d679c85e0453b551c0012ae37a4dc30d48e: Status 404 returned error can't find the container with id cdff7d7d0a81f7d5dee4f6ad01cc4d679c85e0453b551c0012ae37a4dc30d48e Jan 30 21:45:01 crc kubenswrapper[4751]: I0130 21:45:01.019371 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"] Jan 30 21:45:01 crc kubenswrapper[4751]: I0130 21:45:01.829006 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ab0c22c-f078-413c-ac94-9e543a02c3fb","Type":"ContainerStarted","Data":"228bb523988a04ece3190be6ec56bedbaf8c4a0b73cde269fd8686478b71db4d"} Jan 30 21:45:01 crc kubenswrapper[4751]: I0130 21:45:01.831775 4751 generic.go:334] "Generic (PLEG): container finished" podID="60a5fa77-b23e-417a-9854-929675be1c58" containerID="a925b908937d8dd9436a4992fc297b882d7c680a8bb02a09739b64f2a561f95a" exitCode=0 Jan 30 21:45:01 crc kubenswrapper[4751]: I0130 21:45:01.831827 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc" event={"ID":"60a5fa77-b23e-417a-9854-929675be1c58","Type":"ContainerDied","Data":"a925b908937d8dd9436a4992fc297b882d7c680a8bb02a09739b64f2a561f95a"} Jan 30 21:45:01 crc kubenswrapper[4751]: I0130 21:45:01.831853 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc" event={"ID":"60a5fa77-b23e-417a-9854-929675be1c58","Type":"ContainerStarted","Data":"cdff7d7d0a81f7d5dee4f6ad01cc4d679c85e0453b551c0012ae37a4dc30d48e"} Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.319245 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc" Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.406622 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60a5fa77-b23e-417a-9854-929675be1c58-config-volume\") pod \"60a5fa77-b23e-417a-9854-929675be1c58\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") " Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.406962 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvzrp\" (UniqueName: \"kubernetes.io/projected/60a5fa77-b23e-417a-9854-929675be1c58-kube-api-access-vvzrp\") pod \"60a5fa77-b23e-417a-9854-929675be1c58\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") " Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.407137 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60a5fa77-b23e-417a-9854-929675be1c58-secret-volume\") pod \"60a5fa77-b23e-417a-9854-929675be1c58\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") " Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.407372 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60a5fa77-b23e-417a-9854-929675be1c58-config-volume" (OuterVolumeSpecName: "config-volume") pod "60a5fa77-b23e-417a-9854-929675be1c58" (UID: "60a5fa77-b23e-417a-9854-929675be1c58"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.408497 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60a5fa77-b23e-417a-9854-929675be1c58-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.414150 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60a5fa77-b23e-417a-9854-929675be1c58-kube-api-access-vvzrp" (OuterVolumeSpecName: "kube-api-access-vvzrp") pod "60a5fa77-b23e-417a-9854-929675be1c58" (UID: "60a5fa77-b23e-417a-9854-929675be1c58"). InnerVolumeSpecName "kube-api-access-vvzrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.414777 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a5fa77-b23e-417a-9854-929675be1c58-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "60a5fa77-b23e-417a-9854-929675be1c58" (UID: "60a5fa77-b23e-417a-9854-929675be1c58"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.511875 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvzrp\" (UniqueName: \"kubernetes.io/projected/60a5fa77-b23e-417a-9854-929675be1c58-kube-api-access-vvzrp\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.511914 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60a5fa77-b23e-417a-9854-929675be1c58-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.873089 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc" event={"ID":"60a5fa77-b23e-417a-9854-929675be1c58","Type":"ContainerDied","Data":"cdff7d7d0a81f7d5dee4f6ad01cc4d679c85e0453b551c0012ae37a4dc30d48e"} Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.873140 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdff7d7d0a81f7d5dee4f6ad01cc4d679c85e0453b551c0012ae37a4dc30d48e" Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.873169 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc" Jan 30 21:45:23 crc kubenswrapper[4751]: I0130 21:45:23.989054 4751 scope.go:117] "RemoveContainer" containerID="bdd03488d3195a549fc04a34aab5bd9be42fab7815eccaedf690eaba2f311d80" Jan 30 21:45:24 crc kubenswrapper[4751]: I0130 21:45:24.035740 4751 scope.go:117] "RemoveContainer" containerID="5e15084dd70b42693552f9d64b22474ea93dd026e14a53bb39cd74bd8ba86b97" Jan 30 21:45:24 crc kubenswrapper[4751]: I0130 21:45:24.067502 4751 scope.go:117] "RemoveContainer" containerID="e3064bf78a4cd92d2b24a8bdce3402cc789ce700663ceec250dcf8768aa0ad5c" Jan 30 21:45:24 crc kubenswrapper[4751]: I0130 21:45:24.121128 4751 scope.go:117] "RemoveContainer" containerID="d9b252a19e1756dc14b8604eb4ec0d16757d20c0506507f763599f15997045f8" Jan 30 21:45:24 crc kubenswrapper[4751]: I0130 21:45:24.173245 4751 scope.go:117] "RemoveContainer" containerID="d9de83cadc3b076ba912dc65301ea8bc1d6d0414a32e18815fa439a9c91d4dfb" Jan 30 21:45:33 crc kubenswrapper[4751]: I0130 21:45:33.242397 4751 generic.go:334] "Generic (PLEG): container finished" podID="4ab0c22c-f078-413c-ac94-9e543a02c3fb" containerID="228bb523988a04ece3190be6ec56bedbaf8c4a0b73cde269fd8686478b71db4d" exitCode=0 Jan 30 21:45:33 crc kubenswrapper[4751]: I0130 21:45:33.242490 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ab0c22c-f078-413c-ac94-9e543a02c3fb","Type":"ContainerDied","Data":"228bb523988a04ece3190be6ec56bedbaf8c4a0b73cde269fd8686478b71db4d"} Jan 30 21:45:34 crc kubenswrapper[4751]: I0130 21:45:34.256754 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ab0c22c-f078-413c-ac94-9e543a02c3fb","Type":"ContainerStarted","Data":"d96b636201e112e943d14030291559f2a14d8aca6384012b51550acf798007fd"} Jan 30 21:45:34 crc kubenswrapper[4751]: I0130 21:45:34.259972 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 21:45:34 crc kubenswrapper[4751]: I0130 21:45:34.311775 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.311726568 podStartE2EDuration="37.311726568s" 
podCreationTimestamp="2026-01-30 21:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:34.282396657 +0000 UTC m=+1873.028219336" watchObservedRunningTime="2026-01-30 21:45:34.311726568 +0000 UTC m=+1873.057549217" Jan 30 21:45:48 crc kubenswrapper[4751]: I0130 21:45:48.365864 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 21:46:24 crc kubenswrapper[4751]: I0130 21:46:24.292993 4751 scope.go:117] "RemoveContainer" containerID="3f3a7ffb288bc415dc5b59baa7fb9c6b68e00f52800d342ad995dbc272c7f1bb" Jan 30 21:46:24 crc kubenswrapper[4751]: I0130 21:46:24.335737 4751 scope.go:117] "RemoveContainer" containerID="cf22f8655eccc32aa59cef7b29c129725319b4f4f4da7c51fdef15993d0d2382" Jan 30 21:46:24 crc kubenswrapper[4751]: I0130 21:46:24.383733 4751 scope.go:117] "RemoveContainer" containerID="dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e" Jan 30 21:46:43 crc kubenswrapper[4751]: I0130 21:46:43.052904 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-mxcnd"] Jan 30 21:46:43 crc kubenswrapper[4751]: I0130 21:46:43.072042 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-mxcnd"] Jan 30 21:46:44 crc kubenswrapper[4751]: I0130 21:46:44.000055 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5" path="/var/lib/kubelet/pods/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5/volumes" Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.077995 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-1da7-account-create-update-q9cg8"] Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.108848 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-p9lfn"] Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.126934 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-fed1-account-create-update-ztdkt"] Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.148111 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a004-account-create-update-zkpzg"] Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.180428 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-hgg7b"] Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.194383 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a004-account-create-update-zkpzg"] Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.207735 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-1da7-account-create-update-q9cg8"] Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.218971 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-fed1-account-create-update-ztdkt"] Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.231871 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-p9lfn"] Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.243386 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-hgg7b"] Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.256431 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-dd31-account-create-update-4hlqb"] Jan 30 21:46:50 crc 
kubenswrapper[4751]: I0130 21:46:50.270850 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-dd31-account-create-update-4hlqb"] Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.281553 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-7tt6b"] Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.294712 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-7tt6b"] Jan 30 21:46:51 crc kubenswrapper[4751]: I0130 21:46:51.992022 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37f348fb-7f83-40db-98b2-7e8bc603a3e6" path="/var/lib/kubelet/pods/37f348fb-7f83-40db-98b2-7e8bc603a3e6/volumes" Jan 30 21:46:51 crc kubenswrapper[4751]: I0130 21:46:51.993080 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fbc6a33-d240-4982-ade1-668f5da8b516" path="/var/lib/kubelet/pods/4fbc6a33-d240-4982-ade1-668f5da8b516/volumes" Jan 30 21:46:51 crc kubenswrapper[4751]: I0130 21:46:51.993971 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55099194-6cb2-437d-ae0d-a08c104de380" path="/var/lib/kubelet/pods/55099194-6cb2-437d-ae0d-a08c104de380/volumes" Jan 30 21:46:51 crc kubenswrapper[4751]: I0130 21:46:51.994836 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="722402dd-bf51-47a6-b20e-85aec93527d9" path="/var/lib/kubelet/pods/722402dd-bf51-47a6-b20e-85aec93527d9/volumes" Jan 30 21:46:51 crc kubenswrapper[4751]: I0130 21:46:51.997841 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93341dcd-a293-4879-8baf-855556383780" path="/var/lib/kubelet/pods/93341dcd-a293-4879-8baf-855556383780/volumes" Jan 30 21:46:51 crc kubenswrapper[4751]: I0130 21:46:51.998665 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1373e37-3653-4f5d-9978-9d1cca4e546b" path="/var/lib/kubelet/pods/d1373e37-3653-4f5d-9978-9d1cca4e546b/volumes" Jan 30 21:46:51 crc kubenswrapper[4751]: I0130 21:46:51.999576 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2" path="/var/lib/kubelet/pods/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2/volumes" Jan 30 21:46:54 crc kubenswrapper[4751]: I0130 21:46:54.127237 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:46:54 crc kubenswrapper[4751]: I0130 21:46:54.128033 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:46:58 crc kubenswrapper[4751]: I0130 21:46:58.040659 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-e51b-account-create-update-bskb2"] Jan 30 21:46:58 crc kubenswrapper[4751]: I0130 21:46:58.059479 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-29gtt"] Jan 30 21:46:58 crc kubenswrapper[4751]: I0130 21:46:58.074075 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-29gtt"] Jan 30 21:46:58 crc 
kubenswrapper[4751]: I0130 21:46:58.088603 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-e51b-account-create-update-bskb2"] Jan 30 21:46:59 crc kubenswrapper[4751]: I0130 21:46:59.991319 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f2ec939-8595-4611-a636-a46fffaa8ebf" path="/var/lib/kubelet/pods/1f2ec939-8595-4611-a636-a46fffaa8ebf/volumes" Jan 30 21:46:59 crc kubenswrapper[4751]: I0130 21:46:59.992807 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d36824ca-c5a8-4514-9276-e49126a66018" path="/var/lib/kubelet/pods/d36824ca-c5a8-4514-9276-e49126a66018/volumes" Jan 30 21:47:00 crc kubenswrapper[4751]: I0130 21:47:00.028499 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2xmqz"] Jan 30 21:47:00 crc kubenswrapper[4751]: I0130 21:47:00.043773 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-2xmqz"] Jan 30 21:47:01 crc kubenswrapper[4751]: I0130 21:47:01.992247 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27ed4323-ecba-4f90-b7ea-a5a0ff7713d6" path="/var/lib/kubelet/pods/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6/volumes" Jan 30 21:47:21 crc kubenswrapper[4751]: I0130 21:47:21.050521 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-r8wrn"] Jan 30 21:47:21 crc kubenswrapper[4751]: I0130 21:47:21.066133 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-r8wrn"] Jan 30 21:47:21 crc kubenswrapper[4751]: I0130 21:47:21.993149 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32a93444-0221-40b7-9869-428788112ae2" path="/var/lib/kubelet/pods/32a93444-0221-40b7-9869-428788112ae2/volumes" Jan 30 21:47:24 crc kubenswrapper[4751]: I0130 21:47:24.126400 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:47:24 crc kubenswrapper[4751]: I0130 21:47:24.126479 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:47:24 crc kubenswrapper[4751]: I0130 21:47:24.520702 4751 scope.go:117] "RemoveContainer" containerID="c18d43f25fad540cc4b6980ee198b0b5113db4829b6825bf308264ef91e01601" Jan 30 21:47:24 crc kubenswrapper[4751]: I0130 21:47:24.569307 4751 scope.go:117] "RemoveContainer" containerID="0309515ef606ff55b4a18e80ad5013912740e27a1539e1146826375f54a2b553" Jan 30 21:47:24 crc kubenswrapper[4751]: I0130 21:47:24.647412 4751 scope.go:117] "RemoveContainer" containerID="327aabac3be4ee9fde091b36b1b374aaf9d59f04f57b4504442450704eca0e64" Jan 30 21:47:24 crc kubenswrapper[4751]: I0130 21:47:24.702247 4751 scope.go:117] "RemoveContainer" containerID="a4c6ded6bcebfebc69538cc39c7d557a4894403ba3b9a46406bf7a54b2fb9107" Jan 30 21:47:24 crc kubenswrapper[4751]: I0130 21:47:24.746590 4751 scope.go:117] "RemoveContainer" containerID="47ae5041500feb907ed9d9736f2e4bbce3e444b85130301585ffd13ba081d9a9" Jan 30 21:47:24 crc kubenswrapper[4751]: I0130 21:47:24.807397 4751 scope.go:117] "RemoveContainer" 
containerID="73515e94e6f7a825d6c9ac37458f6d6de21c87de5edaea4b69d38594e2145bf0" Jan 30 21:47:24 crc kubenswrapper[4751]: I0130 21:47:24.875372 4751 scope.go:117] "RemoveContainer" containerID="291bba4a83a01e50f5b8260a72f7589443ccaf7ad2482ecfd294e283e08c6b24" Jan 30 21:47:24 crc kubenswrapper[4751]: I0130 21:47:24.907263 4751 scope.go:117] "RemoveContainer" containerID="3d408c3254750e92d426d8cded49880995124a210d5a1b2ed7f46112cc91e938" Jan 30 21:47:24 crc kubenswrapper[4751]: I0130 21:47:24.934042 4751 scope.go:117] "RemoveContainer" containerID="1b5908039e6b19df93f09b06f432ee6033fa0e6a44029f167f2bd610adfb389f" Jan 30 21:47:24 crc kubenswrapper[4751]: I0130 21:47:24.956440 4751 scope.go:117] "RemoveContainer" containerID="d6a6c7d319a747790016da0d8bf07d4ab98c3d010eb7ce4cdc966d03c722da28" Jan 30 21:47:24 crc kubenswrapper[4751]: I0130 21:47:24.985507 4751 scope.go:117] "RemoveContainer" containerID="0b6fa7471c097e1e891323221bf11b4c99ace89cd782cbdf349cf6bb9189e783" Jan 30 21:47:25 crc kubenswrapper[4751]: I0130 21:47:25.009700 4751 scope.go:117] "RemoveContainer" containerID="6d875e7e116aae53e17490ae6ad2fcbb1e85d4c6ca0051daa64edc6f242dd628" Jan 30 21:47:25 crc kubenswrapper[4751]: I0130 21:47:25.032999 4751 scope.go:117] "RemoveContainer" containerID="963c152112c095b417af6d89f95dac5ff1eb3a21950942a6d257f3fa15a08da7" Jan 30 21:47:30 crc kubenswrapper[4751]: I0130 21:47:30.603185 4751 generic.go:334] "Generic (PLEG): container finished" podID="25d1f8e8-75ed-46ae-b674-87f34c4edbfa" containerID="5fb425b25c8902fe60e5dcd58df1f879542305f303c7a43c344cbd78332f0ba4" exitCode=0 Jan 30 21:47:30 crc kubenswrapper[4751]: I0130 21:47:30.603227 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" event={"ID":"25d1f8e8-75ed-46ae-b674-87f34c4edbfa","Type":"ContainerDied","Data":"5fb425b25c8902fe60e5dcd58df1f879542305f303c7a43c344cbd78332f0ba4"} Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.040585 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0f07-account-create-update-fr6kw"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.063452 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-zhgsw"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.081698 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-zhgsw"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.092548 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0f07-account-create-update-fr6kw"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.103594 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-31bb-account-create-update-w6h5f"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.114403 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-2618-account-create-update-fdl95"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.124065 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-2618-account-create-update-fdl95"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.132908 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-31bb-account-create-update-w6h5f"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.142856 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-hr9lv"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.153028 4751 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/barbican-db-create-2gxmh"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.176252 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-f7f7-account-create-update-d88cz"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.187496 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-hr9lv"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.198953 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-f7f7-account-create-update-d88cz"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.210147 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-2gxmh"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.220356 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-lqv47"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.230502 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-lqv47"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.990146 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00437219-cb6b-48ad-a0cb-d75b82412ba1" path="/var/lib/kubelet/pods/00437219-cb6b-48ad-a0cb-d75b82412ba1/volumes" Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.991030 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0297c6e3-62f8-49cc-a073-8bb104949456" path="/var/lib/kubelet/pods/0297c6e3-62f8-49cc-a073-8bb104949456/volumes" Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.991818 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="056813ab-3913-42db-afa1-a79cb8e3a3c9" path="/var/lib/kubelet/pods/056813ab-3913-42db-afa1-a79cb8e3a3c9/volumes" Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.992616 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b9f9eed-02b1-4541-8ebb-34826639233b" path="/var/lib/kubelet/pods/3b9f9eed-02b1-4541-8ebb-34826639233b/volumes" Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.994036 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c" path="/var/lib/kubelet/pods/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c/volumes" Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.994885 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9112f9c-911e-47d4-be64-e6f90fa6fa35" path="/var/lib/kubelet/pods/a9112f9c-911e-47d4-be64-e6f90fa6fa35/volumes" Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.995980 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf1f702d-7084-4e85-add9-15c10223d801" path="/var/lib/kubelet/pods/bf1f702d-7084-4e85-add9-15c10223d801/volumes" Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.997671 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e63e6079-6772-46c3-9ec3-1e01741a210f" path="/var/lib/kubelet/pods/e63e6079-6772-46c3-9ec3-1e01741a210f/volumes" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.099719 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.303199 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6xjb\" (UniqueName: \"kubernetes.io/projected/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-kube-api-access-x6xjb\") pod \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.303348 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-ssh-key-openstack-edpm-ipam\") pod \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.303494 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-inventory\") pod \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.303551 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-bootstrap-combined-ca-bundle\") pod \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.309405 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "25d1f8e8-75ed-46ae-b674-87f34c4edbfa" (UID: "25d1f8e8-75ed-46ae-b674-87f34c4edbfa"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.310659 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-kube-api-access-x6xjb" (OuterVolumeSpecName: "kube-api-access-x6xjb") pod "25d1f8e8-75ed-46ae-b674-87f34c4edbfa" (UID: "25d1f8e8-75ed-46ae-b674-87f34c4edbfa"). InnerVolumeSpecName "kube-api-access-x6xjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.335276 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "25d1f8e8-75ed-46ae-b674-87f34c4edbfa" (UID: "25d1f8e8-75ed-46ae-b674-87f34c4edbfa"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.345559 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-inventory" (OuterVolumeSpecName: "inventory") pod "25d1f8e8-75ed-46ae-b674-87f34c4edbfa" (UID: "25d1f8e8-75ed-46ae-b674-87f34c4edbfa"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.406920 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.406993 4751 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.407021 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6xjb\" (UniqueName: \"kubernetes.io/projected/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-kube-api-access-x6xjb\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.407046 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.637915 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" event={"ID":"25d1f8e8-75ed-46ae-b674-87f34c4edbfa","Type":"ContainerDied","Data":"51f8374b96e74508b8ea161ccefb7ec2d95c6112bacdf4605ef5155ad9ff2a2e"} Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.638040 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51f8374b96e74508b8ea161ccefb7ec2d95c6112bacdf4605ef5155ad9ff2a2e" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.638789 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.732224 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p"] Jan 30 21:47:32 crc kubenswrapper[4751]: E0130 21:47:32.732963 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60a5fa77-b23e-417a-9854-929675be1c58" containerName="collect-profiles" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.732997 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="60a5fa77-b23e-417a-9854-929675be1c58" containerName="collect-profiles" Jan 30 21:47:32 crc kubenswrapper[4751]: E0130 21:47:32.733030 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d1f8e8-75ed-46ae-b674-87f34c4edbfa" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.733041 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d1f8e8-75ed-46ae-b674-87f34c4edbfa" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.733249 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="60a5fa77-b23e-417a-9854-929675be1c58" containerName="collect-profiles" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.733269 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d1f8e8-75ed-46ae-b674-87f34c4edbfa" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.734331 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.738768 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.739074 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.739130 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.739495 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.749898 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p"] Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.922891 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wz8d\" (UniqueName: \"kubernetes.io/projected/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-kube-api-access-6wz8d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-czw8p\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.923009 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-czw8p\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.923175 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-czw8p\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:33 crc kubenswrapper[4751]: I0130 21:47:33.025884 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-czw8p\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:33 crc kubenswrapper[4751]: I0130 21:47:33.026099 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wz8d\" (UniqueName: \"kubernetes.io/projected/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-kube-api-access-6wz8d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-czw8p\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:33 crc kubenswrapper[4751]: I0130 21:47:33.026297 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-czw8p\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:33 crc kubenswrapper[4751]: I0130 21:47:33.030577 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-czw8p\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:33 crc kubenswrapper[4751]: I0130 21:47:33.042629 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-czw8p\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:33 crc kubenswrapper[4751]: I0130 21:47:33.042962 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wz8d\" (UniqueName: \"kubernetes.io/projected/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-kube-api-access-6wz8d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-czw8p\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:33 crc kubenswrapper[4751]: I0130 21:47:33.054281 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:33 crc kubenswrapper[4751]: I0130 21:47:33.621408 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p"] Jan 30 21:47:33 crc kubenswrapper[4751]: I0130 21:47:33.627929 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 21:47:33 crc kubenswrapper[4751]: I0130 21:47:33.655260 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" event={"ID":"b45d4d88-6b91-4bfc-9619-68fdb7d90f05","Type":"ContainerStarted","Data":"871112c40c834fc0ee43096e7b64ddfc12d71cae78e7a64ab8d6c06bfe6ebb40"} Jan 30 21:47:35 crc kubenswrapper[4751]: I0130 21:47:35.681797 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" event={"ID":"b45d4d88-6b91-4bfc-9619-68fdb7d90f05","Type":"ContainerStarted","Data":"c30ac0e8be6c0815dba51db8d797c69e9ca710b0af1c400a4c5b3a4c953e6ebf"} Jan 30 21:47:35 crc kubenswrapper[4751]: I0130 21:47:35.700629 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" podStartSLOduration=2.7271200909999997 podStartE2EDuration="3.700613936s" podCreationTimestamp="2026-01-30 21:47:32 +0000 UTC" firstStartedPulling="2026-01-30 21:47:33.62774096 +0000 UTC m=+1992.373563609" lastFinishedPulling="2026-01-30 21:47:34.601234795 +0000 UTC m=+1993.347057454" observedRunningTime="2026-01-30 21:47:35.699729232 +0000 UTC m=+1994.445551901" watchObservedRunningTime="2026-01-30 21:47:35.700613936 +0000 UTC m=+1994.446436585" Jan 30 21:47:37 crc kubenswrapper[4751]: 
I0130 21:47:37.040317 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-z99cv"] Jan 30 21:47:37 crc kubenswrapper[4751]: I0130 21:47:37.049281 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-z99cv"] Jan 30 21:47:37 crc kubenswrapper[4751]: I0130 21:47:37.999052 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bc5d80d-ae17-431d-8e0f-6003af0fa6b1" path="/var/lib/kubelet/pods/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1/volumes" Jan 30 21:47:54 crc kubenswrapper[4751]: I0130 21:47:54.127061 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:47:54 crc kubenswrapper[4751]: I0130 21:47:54.127758 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:47:54 crc kubenswrapper[4751]: I0130 21:47:54.127814 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:47:54 crc kubenswrapper[4751]: I0130 21:47:54.128730 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e83ca35bd085af955b4b3e0476bcb9169304b85473995bcb3f76de779bdcffb0"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:47:54 crc kubenswrapper[4751]: I0130 21:47:54.128799 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://e83ca35bd085af955b4b3e0476bcb9169304b85473995bcb3f76de779bdcffb0" gracePeriod=600 Jan 30 21:47:54 crc kubenswrapper[4751]: I0130 21:47:54.906406 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="e83ca35bd085af955b4b3e0476bcb9169304b85473995bcb3f76de779bdcffb0" exitCode=0 Jan 30 21:47:54 crc kubenswrapper[4751]: I0130 21:47:54.906608 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"e83ca35bd085af955b4b3e0476bcb9169304b85473995bcb3f76de779bdcffb0"} Jan 30 21:47:54 crc kubenswrapper[4751]: I0130 21:47:54.906972 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9"} Jan 30 21:47:54 crc kubenswrapper[4751]: I0130 21:47:54.906997 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.353000 4751 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-c4tsm"] Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.356731 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.369724 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c4tsm"] Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.556401 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-catalog-content\") pod \"certified-operators-c4tsm\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.556577 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd858\" (UniqueName: \"kubernetes.io/projected/329634a4-1673-4657-a0fb-bbf17bfc55c7-kube-api-access-kd858\") pod \"certified-operators-c4tsm\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.556623 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-utilities\") pod \"certified-operators-c4tsm\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.659175 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd858\" (UniqueName: \"kubernetes.io/projected/329634a4-1673-4657-a0fb-bbf17bfc55c7-kube-api-access-kd858\") pod \"certified-operators-c4tsm\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.659257 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-utilities\") pod \"certified-operators-c4tsm\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.659404 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-catalog-content\") pod \"certified-operators-c4tsm\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.659915 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-catalog-content\") pod \"certified-operators-c4tsm\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.660031 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-utilities\") pod \"certified-operators-c4tsm\" (UID: 
\"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.699272 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd858\" (UniqueName: \"kubernetes.io/projected/329634a4-1673-4657-a0fb-bbf17bfc55c7-kube-api-access-kd858\") pod \"certified-operators-c4tsm\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.991865 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:09 crc kubenswrapper[4751]: I0130 21:48:09.514277 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c4tsm"] Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.082237 4751 generic.go:334] "Generic (PLEG): container finished" podID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerID="a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c" exitCode=0 Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.082292 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4tsm" event={"ID":"329634a4-1673-4657-a0fb-bbf17bfc55c7","Type":"ContainerDied","Data":"a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c"} Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.082621 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4tsm" event={"ID":"329634a4-1673-4657-a0fb-bbf17bfc55c7","Type":"ContainerStarted","Data":"7e9e07c3f9270f65741e77482a4cd10ab663916efa79256ff1798d504bbcfbbc"} Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.692535 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6xw94"] Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.695445 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.714143 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6xw94"] Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.730595 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-utilities\") pod \"redhat-operators-6xw94\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.730895 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-catalog-content\") pod \"redhat-operators-6xw94\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.731005 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj8c7\" (UniqueName: \"kubernetes.io/projected/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-kube-api-access-kj8c7\") pod \"redhat-operators-6xw94\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.832836 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-utilities\") pod \"redhat-operators-6xw94\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.832948 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-catalog-content\") pod \"redhat-operators-6xw94\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.832990 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj8c7\" (UniqueName: \"kubernetes.io/projected/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-kube-api-access-kj8c7\") pod \"redhat-operators-6xw94\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.834050 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-catalog-content\") pod \"redhat-operators-6xw94\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.834075 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-utilities\") pod \"redhat-operators-6xw94\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.854049 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kj8c7\" (UniqueName: \"kubernetes.io/projected/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-kube-api-access-kj8c7\") pod \"redhat-operators-6xw94\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.040252 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.048700 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-lwm4t"] Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.062042 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-lwm4t"] Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.097308 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4tsm" event={"ID":"329634a4-1673-4657-a0fb-bbf17bfc55c7","Type":"ContainerStarted","Data":"79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240"} Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.299198 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8jb6x"] Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.303564 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.320639 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8jb6x"] Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.448687 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcjxk\" (UniqueName: \"kubernetes.io/projected/602e9b0f-e15c-4855-a0e5-942f2f37f030-kube-api-access-bcjxk\") pod \"community-operators-8jb6x\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.449032 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-utilities\") pod \"community-operators-8jb6x\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.449373 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-catalog-content\") pod \"community-operators-8jb6x\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.550998 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-catalog-content\") pod \"community-operators-8jb6x\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.551468 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcjxk\" (UniqueName: \"kubernetes.io/projected/602e9b0f-e15c-4855-a0e5-942f2f37f030-kube-api-access-bcjxk\") pod 
\"community-operators-8jb6x\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.551601 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-catalog-content\") pod \"community-operators-8jb6x\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.551612 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-utilities\") pod \"community-operators-8jb6x\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.551965 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-utilities\") pod \"community-operators-8jb6x\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.555046 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6xw94"] Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.580263 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcjxk\" (UniqueName: \"kubernetes.io/projected/602e9b0f-e15c-4855-a0e5-942f2f37f030-kube-api-access-bcjxk\") pod \"community-operators-8jb6x\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.636679 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.997749 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d42b4031-ca3e-4b28-b62a-eb346132dc3a" path="/var/lib/kubelet/pods/d42b4031-ca3e-4b28-b62a-eb346132dc3a/volumes" Jan 30 21:48:12 crc kubenswrapper[4751]: I0130 21:48:12.109272 4751 generic.go:334] "Generic (PLEG): container finished" podID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerID="f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c" exitCode=0 Jan 30 21:48:12 crc kubenswrapper[4751]: I0130 21:48:12.111310 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xw94" event={"ID":"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813","Type":"ContainerDied","Data":"f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c"} Jan 30 21:48:12 crc kubenswrapper[4751]: I0130 21:48:12.111389 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xw94" event={"ID":"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813","Type":"ContainerStarted","Data":"e20468a60c3ca009f382cf700c3f7365eee5860b0272834ef0e7ae4dfae413e1"} Jan 30 21:48:12 crc kubenswrapper[4751]: I0130 21:48:12.209309 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8jb6x"] Jan 30 21:48:12 crc kubenswrapper[4751]: W0130 21:48:12.214971 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod602e9b0f_e15c_4855_a0e5_942f2f37f030.slice/crio-8552e2294dac82fcbf0637ad97d4993ccc8c29e4682eee817b2f69ea8c621a0b WatchSource:0}: Error finding container 8552e2294dac82fcbf0637ad97d4993ccc8c29e4682eee817b2f69ea8c621a0b: Status 404 returned error can't find the container with id 8552e2294dac82fcbf0637ad97d4993ccc8c29e4682eee817b2f69ea8c621a0b Jan 30 21:48:13 crc kubenswrapper[4751]: I0130 21:48:13.122344 4751 generic.go:334] "Generic (PLEG): container finished" podID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerID="c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659" exitCode=0 Jan 30 21:48:13 crc kubenswrapper[4751]: I0130 21:48:13.122731 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jb6x" event={"ID":"602e9b0f-e15c-4855-a0e5-942f2f37f030","Type":"ContainerDied","Data":"c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659"} Jan 30 21:48:13 crc kubenswrapper[4751]: I0130 21:48:13.122756 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jb6x" event={"ID":"602e9b0f-e15c-4855-a0e5-942f2f37f030","Type":"ContainerStarted","Data":"8552e2294dac82fcbf0637ad97d4993ccc8c29e4682eee817b2f69ea8c621a0b"} Jan 30 21:48:14 crc kubenswrapper[4751]: I0130 21:48:14.137969 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xw94" event={"ID":"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813","Type":"ContainerStarted","Data":"9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb"} Jan 30 21:48:14 crc kubenswrapper[4751]: I0130 21:48:14.141549 4751 generic.go:334] "Generic (PLEG): container finished" podID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerID="79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240" exitCode=0 Jan 30 21:48:14 crc kubenswrapper[4751]: I0130 21:48:14.141608 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-c4tsm" event={"ID":"329634a4-1673-4657-a0fb-bbf17bfc55c7","Type":"ContainerDied","Data":"79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240"} Jan 30 21:48:15 crc kubenswrapper[4751]: I0130 21:48:15.154848 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jb6x" event={"ID":"602e9b0f-e15c-4855-a0e5-942f2f37f030","Type":"ContainerStarted","Data":"8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f"} Jan 30 21:48:16 crc kubenswrapper[4751]: I0130 21:48:16.169484 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4tsm" event={"ID":"329634a4-1673-4657-a0fb-bbf17bfc55c7","Type":"ContainerStarted","Data":"d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b"} Jan 30 21:48:16 crc kubenswrapper[4751]: I0130 21:48:16.190253 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c4tsm" podStartSLOduration=3.450087365 podStartE2EDuration="8.190232213s" podCreationTimestamp="2026-01-30 21:48:08 +0000 UTC" firstStartedPulling="2026-01-30 21:48:10.085212299 +0000 UTC m=+2028.831034958" lastFinishedPulling="2026-01-30 21:48:14.825357147 +0000 UTC m=+2033.571179806" observedRunningTime="2026-01-30 21:48:16.189217205 +0000 UTC m=+2034.935039864" watchObservedRunningTime="2026-01-30 21:48:16.190232213 +0000 UTC m=+2034.936054862" Jan 30 21:48:18 crc kubenswrapper[4751]: I0130 21:48:18.992910 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:18 crc kubenswrapper[4751]: I0130 21:48:18.993463 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:19 crc kubenswrapper[4751]: I0130 21:48:19.207107 4751 generic.go:334] "Generic (PLEG): container finished" podID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerID="8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f" exitCode=0 Jan 30 21:48:19 crc kubenswrapper[4751]: I0130 21:48:19.207152 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jb6x" event={"ID":"602e9b0f-e15c-4855-a0e5-942f2f37f030","Type":"ContainerDied","Data":"8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f"} Jan 30 21:48:20 crc kubenswrapper[4751]: I0130 21:48:20.047259 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-c4tsm" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerName="registry-server" probeResult="failure" output=< Jan 30 21:48:20 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:48:20 crc kubenswrapper[4751]: > Jan 30 21:48:20 crc kubenswrapper[4751]: I0130 21:48:20.220831 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jb6x" event={"ID":"602e9b0f-e15c-4855-a0e5-942f2f37f030","Type":"ContainerStarted","Data":"2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d"} Jan 30 21:48:20 crc kubenswrapper[4751]: I0130 21:48:20.243745 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8jb6x" podStartSLOduration=2.735396521 podStartE2EDuration="9.243726599s" podCreationTimestamp="2026-01-30 21:48:11 +0000 UTC" firstStartedPulling="2026-01-30 21:48:13.125770103 +0000 
UTC m=+2031.871592752" lastFinishedPulling="2026-01-30 21:48:19.634100181 +0000 UTC m=+2038.379922830" observedRunningTime="2026-01-30 21:48:20.241130519 +0000 UTC m=+2038.986953188" watchObservedRunningTime="2026-01-30 21:48:20.243726599 +0000 UTC m=+2038.989549248" Jan 30 21:48:21 crc kubenswrapper[4751]: I0130 21:48:21.636995 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:21 crc kubenswrapper[4751]: I0130 21:48:21.637042 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:22 crc kubenswrapper[4751]: I0130 21:48:22.242471 4751 generic.go:334] "Generic (PLEG): container finished" podID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerID="9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb" exitCode=0 Jan 30 21:48:22 crc kubenswrapper[4751]: I0130 21:48:22.242537 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xw94" event={"ID":"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813","Type":"ContainerDied","Data":"9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb"} Jan 30 21:48:22 crc kubenswrapper[4751]: I0130 21:48:22.707646 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8jb6x" podUID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerName="registry-server" probeResult="failure" output=< Jan 30 21:48:22 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:48:22 crc kubenswrapper[4751]: > Jan 30 21:48:23 crc kubenswrapper[4751]: I0130 21:48:23.257622 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xw94" event={"ID":"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813","Type":"ContainerStarted","Data":"af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0"} Jan 30 21:48:23 crc kubenswrapper[4751]: I0130 21:48:23.282459 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6xw94" podStartSLOduration=2.693903216 podStartE2EDuration="13.282442722s" podCreationTimestamp="2026-01-30 21:48:10 +0000 UTC" firstStartedPulling="2026-01-30 21:48:12.113092258 +0000 UTC m=+2030.858914907" lastFinishedPulling="2026-01-30 21:48:22.701631764 +0000 UTC m=+2041.447454413" observedRunningTime="2026-01-30 21:48:23.280539081 +0000 UTC m=+2042.026361730" watchObservedRunningTime="2026-01-30 21:48:23.282442722 +0000 UTC m=+2042.028265371" Jan 30 21:48:24 crc kubenswrapper[4751]: I0130 21:48:24.038764 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-v9spg"] Jan 30 21:48:24 crc kubenswrapper[4751]: I0130 21:48:24.060629 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-v9spg"] Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.041217 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-89chj"] Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.055524 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-rt7v2"] Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.071102 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-89chj"] Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.083291 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-rt7v2"] Jan 30 
21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.331191 4751 scope.go:117] "RemoveContainer" containerID="dde509ef6f207cc2bcc76a35805e737a06489616a2c06460edf270c4d46949ff" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.361182 4751 scope.go:117] "RemoveContainer" containerID="c5ab688f9b8e1fb82010bd34dac14cc2f514cc43545c635a532a50efe0bee3a6" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.433629 4751 scope.go:117] "RemoveContainer" containerID="86dc09eda61ac7de53bc29716e31ede7719959b2e5920e15b3c99ca75f4be060" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.500292 4751 scope.go:117] "RemoveContainer" containerID="5e236b245c56a064616f5c0cfe68da26d9003a62ee339d2b96a7cc68c86cbcf4" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.560947 4751 scope.go:117] "RemoveContainer" containerID="757678878d640ed42bebe096fefc08e81d2dc4fdaa39596d495dfc07a6e988a4" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.624622 4751 scope.go:117] "RemoveContainer" containerID="51523e8d2edcc2046cb1a83c98d7a2fbd7964b697b149674907f1751f57faefe" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.686598 4751 scope.go:117] "RemoveContainer" containerID="9b73d59359bfb3a5bef8ccdbc1b9174270c6e66e22c29e992c6a512a45cd76ed" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.717081 4751 scope.go:117] "RemoveContainer" containerID="3f232e6698625cc60ad1770425a3662d4b2453997f82a2581cabc9a30c379df0" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.742431 4751 scope.go:117] "RemoveContainer" containerID="081f6f9ef52a04848276eea3741fadf9bc134d70d5112e179f163c9ecb46984e" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.767281 4751 scope.go:117] "RemoveContainer" containerID="dbe7739dccd34474fee5592432c44f2757e5e43cc8cb53f953f6011cf0eab9eb" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.991739 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8555e0d7-6d06-4edb-b463-86f7bf829949" path="/var/lib/kubelet/pods/8555e0d7-6d06-4edb-b463-86f7bf829949/volumes" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.992778 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a90f6a78-a996-49f8-a567-d2699c737d1f" path="/var/lib/kubelet/pods/a90f6a78-a996-49f8-a567-d2699c737d1f/volumes" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.993474 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4c714e3-2147-4f8a-97cd-2e62e0f3a955" path="/var/lib/kubelet/pods/b4c714e3-2147-4f8a-97cd-2e62e0f3a955/volumes" Jan 30 21:48:30 crc kubenswrapper[4751]: I0130 21:48:30.046624 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-c4tsm" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerName="registry-server" probeResult="failure" output=< Jan 30 21:48:30 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:48:30 crc kubenswrapper[4751]: > Jan 30 21:48:31 crc kubenswrapper[4751]: I0130 21:48:31.041486 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:31 crc kubenswrapper[4751]: I0130 21:48:31.041901 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:31 crc kubenswrapper[4751]: I0130 21:48:31.687924 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:31 crc 
kubenswrapper[4751]: I0130 21:48:31.737182 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:31 crc kubenswrapper[4751]: I0130 21:48:31.931438 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8jb6x"] Jan 30 21:48:32 crc kubenswrapper[4751]: I0130 21:48:32.090784 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6xw94" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="registry-server" probeResult="failure" output=< Jan 30 21:48:32 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:48:32 crc kubenswrapper[4751]: > Jan 30 21:48:33 crc kubenswrapper[4751]: I0130 21:48:33.349500 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8jb6x" podUID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerName="registry-server" containerID="cri-o://2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d" gracePeriod=2 Jan 30 21:48:33 crc kubenswrapper[4751]: I0130 21:48:33.931097 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.031431 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-catalog-content\") pod \"602e9b0f-e15c-4855-a0e5-942f2f37f030\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.031557 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-utilities\") pod \"602e9b0f-e15c-4855-a0e5-942f2f37f030\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.031655 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcjxk\" (UniqueName: \"kubernetes.io/projected/602e9b0f-e15c-4855-a0e5-942f2f37f030-kube-api-access-bcjxk\") pod \"602e9b0f-e15c-4855-a0e5-942f2f37f030\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.034669 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-utilities" (OuterVolumeSpecName: "utilities") pod "602e9b0f-e15c-4855-a0e5-942f2f37f030" (UID: "602e9b0f-e15c-4855-a0e5-942f2f37f030"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.038401 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/602e9b0f-e15c-4855-a0e5-942f2f37f030-kube-api-access-bcjxk" (OuterVolumeSpecName: "kube-api-access-bcjxk") pod "602e9b0f-e15c-4855-a0e5-942f2f37f030" (UID: "602e9b0f-e15c-4855-a0e5-942f2f37f030"). InnerVolumeSpecName "kube-api-access-bcjxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.107511 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "602e9b0f-e15c-4855-a0e5-942f2f37f030" (UID: "602e9b0f-e15c-4855-a0e5-942f2f37f030"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.133963 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcjxk\" (UniqueName: \"kubernetes.io/projected/602e9b0f-e15c-4855-a0e5-942f2f37f030-kube-api-access-bcjxk\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.133999 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.134008 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.360862 4751 generic.go:334] "Generic (PLEG): container finished" podID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerID="2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d" exitCode=0 Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.360927 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jb6x" event={"ID":"602e9b0f-e15c-4855-a0e5-942f2f37f030","Type":"ContainerDied","Data":"2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d"} Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.360938 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.360967 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jb6x" event={"ID":"602e9b0f-e15c-4855-a0e5-942f2f37f030","Type":"ContainerDied","Data":"8552e2294dac82fcbf0637ad97d4993ccc8c29e4682eee817b2f69ea8c621a0b"} Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.360985 4751 scope.go:117] "RemoveContainer" containerID="2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.384414 4751 scope.go:117] "RemoveContainer" containerID="8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.400157 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8jb6x"] Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.410863 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8jb6x"] Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.417704 4751 scope.go:117] "RemoveContainer" containerID="c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.490106 4751 scope.go:117] "RemoveContainer" containerID="2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d" Jan 30 21:48:34 crc kubenswrapper[4751]: E0130 21:48:34.495413 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d\": container with ID starting with 2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d not found: ID does not exist" containerID="2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.495456 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d"} err="failed to get container status \"2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d\": rpc error: code = NotFound desc = could not find container \"2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d\": container with ID starting with 2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d not found: ID does not exist" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.495673 4751 scope.go:117] "RemoveContainer" containerID="8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f" Jan 30 21:48:34 crc kubenswrapper[4751]: E0130 21:48:34.496240 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f\": container with ID starting with 8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f not found: ID does not exist" containerID="8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.496287 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f"} err="failed to get container status \"8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f\": rpc error: code = NotFound desc = could not find 
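The error-then-info pairs above ("ContainerStatus from runtime service failed" followed by "DeleteContainer returned error") are a benign race: the container is already gone by the time the kubelet re-queries its status, and the CRI runtime answers NotFound. A sketch of the same idempotent-cleanup pattern over any gRPC API; the remover function here is hypothetical:

```go
// Sketch: treat gRPC NotFound as "already cleaned up", as the kubelet
// effectively does for the RemoveContainer retries in the log above.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIfPresent makes repeated cleanup of the same container ID harmless.
func removeIfPresent(remove func(id string) error, id string) error {
	err := remove(id)
	if status.Code(err) == codes.NotFound {
		fmt.Printf("container %s already gone, nothing to do\n", id)
		return nil
	}
	return err // nil on success, real errors propagate
}

func main() {
	// Hypothetical remover standing in for a CRI runtime that has
	// already deleted the container.
	fake := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	if err := removeIfPresent(fake, "2bbc9843de3a"); err != nil {
		panic(err)
	}
}
```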
container \"8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f\": container with ID starting with 8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f not found: ID does not exist" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.496324 4751 scope.go:117] "RemoveContainer" containerID="c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659" Jan 30 21:48:34 crc kubenswrapper[4751]: E0130 21:48:34.496897 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659\": container with ID starting with c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659 not found: ID does not exist" containerID="c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.496922 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659"} err="failed to get container status \"c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659\": rpc error: code = NotFound desc = could not find container \"c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659\": container with ID starting with c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659 not found: ID does not exist" Jan 30 21:48:35 crc kubenswrapper[4751]: I0130 21:48:35.988525 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602e9b0f-e15c-4855-a0e5-942f2f37f030" path="/var/lib/kubelet/pods/602e9b0f-e15c-4855-a0e5-942f2f37f030/volumes" Jan 30 21:48:39 crc kubenswrapper[4751]: I0130 21:48:39.057024 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:39 crc kubenswrapper[4751]: I0130 21:48:39.129461 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:39 crc kubenswrapper[4751]: I0130 21:48:39.501181 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c4tsm"] Jan 30 21:48:40 crc kubenswrapper[4751]: I0130 21:48:40.418181 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c4tsm" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerName="registry-server" containerID="cri-o://d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b" gracePeriod=2 Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.038728 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.048319 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-bq6lp"] Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.058547 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-bq6lp"] Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.161481 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-utilities\") pod \"329634a4-1673-4657-a0fb-bbf17bfc55c7\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.161633 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-catalog-content\") pod \"329634a4-1673-4657-a0fb-bbf17bfc55c7\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.161738 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd858\" (UniqueName: \"kubernetes.io/projected/329634a4-1673-4657-a0fb-bbf17bfc55c7-kube-api-access-kd858\") pod \"329634a4-1673-4657-a0fb-bbf17bfc55c7\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.162599 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-utilities" (OuterVolumeSpecName: "utilities") pod "329634a4-1673-4657-a0fb-bbf17bfc55c7" (UID: "329634a4-1673-4657-a0fb-bbf17bfc55c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.163758 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.167553 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/329634a4-1673-4657-a0fb-bbf17bfc55c7-kube-api-access-kd858" (OuterVolumeSpecName: "kube-api-access-kd858") pod "329634a4-1673-4657-a0fb-bbf17bfc55c7" (UID: "329634a4-1673-4657-a0fb-bbf17bfc55c7"). InnerVolumeSpecName "kube-api-access-kd858". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.218371 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "329634a4-1673-4657-a0fb-bbf17bfc55c7" (UID: "329634a4-1673-4657-a0fb-bbf17bfc55c7"). InnerVolumeSpecName "catalog-content". 
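Several teardown sequences interleave in this stretch of the journal (two catalog pods plus openstack job pods). A small filter that keeps only the lines mentioning one pod UID makes the "UnmountVolume started" → "TearDown succeeded" → "Volume detached" ordering easy to follow. Sketch only: it reads the journal text from stdin, with the UID as the first argument:

```go
// Sketch: filter kubelet journal lines by pod UID.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: podgrep <pod-uid> < kubelet.log")
		os.Exit(1)
	}
	uid := os.Args[1]
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if strings.Contains(sc.Text(), uid) {
			fmt.Println(sc.Text())
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan:", err)
		os.Exit(1)
	}
}
```

For example, `journalctl -u kubelet | go run podgrep.go 329634a4-1673-4657-a0fb-bbf17bfc55c7` isolates the certified-operators-c4tsm teardown above.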
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.266253 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.266299 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd858\" (UniqueName: \"kubernetes.io/projected/329634a4-1673-4657-a0fb-bbf17bfc55c7-kube-api-access-kd858\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.431248 4751 generic.go:334] "Generic (PLEG): container finished" podID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerID="d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b" exitCode=0 Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.431307 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.431306 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4tsm" event={"ID":"329634a4-1673-4657-a0fb-bbf17bfc55c7","Type":"ContainerDied","Data":"d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b"} Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.431423 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4tsm" event={"ID":"329634a4-1673-4657-a0fb-bbf17bfc55c7","Type":"ContainerDied","Data":"7e9e07c3f9270f65741e77482a4cd10ab663916efa79256ff1798d504bbcfbbc"} Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.431449 4751 scope.go:117] "RemoveContainer" containerID="d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.463244 4751 scope.go:117] "RemoveContainer" containerID="79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.468124 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c4tsm"] Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.478631 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c4tsm"] Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.491696 4751 scope.go:117] "RemoveContainer" containerID="a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.547682 4751 scope.go:117] "RemoveContainer" containerID="d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b" Jan 30 21:48:41 crc kubenswrapper[4751]: E0130 21:48:41.548396 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b\": container with ID starting with d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b not found: ID does not exist" containerID="d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.548464 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b"} err="failed to get container status 
\"d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b\": rpc error: code = NotFound desc = could not find container \"d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b\": container with ID starting with d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b not found: ID does not exist" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.548511 4751 scope.go:117] "RemoveContainer" containerID="79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240" Jan 30 21:48:41 crc kubenswrapper[4751]: E0130 21:48:41.548927 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240\": container with ID starting with 79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240 not found: ID does not exist" containerID="79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.548982 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240"} err="failed to get container status \"79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240\": rpc error: code = NotFound desc = could not find container \"79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240\": container with ID starting with 79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240 not found: ID does not exist" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.549011 4751 scope.go:117] "RemoveContainer" containerID="a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c" Jan 30 21:48:41 crc kubenswrapper[4751]: E0130 21:48:41.549324 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c\": container with ID starting with a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c not found: ID does not exist" containerID="a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.549387 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c"} err="failed to get container status \"a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c\": rpc error: code = NotFound desc = could not find container \"a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c\": container with ID starting with a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c not found: ID does not exist" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.990097 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" path="/var/lib/kubelet/pods/329634a4-1673-4657-a0fb-bbf17bfc55c7/volumes" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.991067 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" path="/var/lib/kubelet/pods/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24/volumes" Jan 30 21:48:42 crc kubenswrapper[4751]: I0130 21:48:42.106100 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6xw94" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="registry-server" 
probeResult="failure" output=< Jan 30 21:48:42 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:48:42 crc kubenswrapper[4751]: > Jan 30 21:48:52 crc kubenswrapper[4751]: I0130 21:48:52.092750 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6xw94" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="registry-server" probeResult="failure" output=< Jan 30 21:48:52 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:48:52 crc kubenswrapper[4751]: > Jan 30 21:49:02 crc kubenswrapper[4751]: I0130 21:49:02.087803 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6xw94" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="registry-server" probeResult="failure" output=< Jan 30 21:49:02 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:49:02 crc kubenswrapper[4751]: > Jan 30 21:49:11 crc kubenswrapper[4751]: I0130 21:49:11.097177 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:49:11 crc kubenswrapper[4751]: I0130 21:49:11.151292 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:49:11 crc kubenswrapper[4751]: I0130 21:49:11.342161 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6xw94"] Jan 30 21:49:12 crc kubenswrapper[4751]: I0130 21:49:12.876401 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6xw94" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="registry-server" containerID="cri-o://af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0" gracePeriod=2 Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.477688 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.558830 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-utilities\") pod \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.558965 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-catalog-content\") pod \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.559199 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj8c7\" (UniqueName: \"kubernetes.io/projected/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-kube-api-access-kj8c7\") pod \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.563000 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-utilities" (OuterVolumeSpecName: "utilities") pod "3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" (UID: "3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.565066 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-kube-api-access-kj8c7" (OuterVolumeSpecName: "kube-api-access-kj8c7") pod "3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" (UID: "3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813"). InnerVolumeSpecName "kube-api-access-kj8c7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.663569 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.663598 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj8c7\" (UniqueName: \"kubernetes.io/projected/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-kube-api-access-kj8c7\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.679304 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" (UID: "3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.765920 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.887699 4751 generic.go:334] "Generic (PLEG): container finished" podID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerID="af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0" exitCode=0 Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.887741 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xw94" event={"ID":"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813","Type":"ContainerDied","Data":"af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0"} Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.887767 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xw94" event={"ID":"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813","Type":"ContainerDied","Data":"e20468a60c3ca009f382cf700c3f7365eee5860b0272834ef0e7ae4dfae413e1"} Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.887773 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.887786 4751 scope.go:117] "RemoveContainer" containerID="af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.919251 4751 scope.go:117] "RemoveContainer" containerID="9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.938696 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6xw94"] Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.949189 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6xw94"] Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.960745 4751 scope.go:117] "RemoveContainer" containerID="f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.990865 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" path="/var/lib/kubelet/pods/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813/volumes" Jan 30 21:49:14 crc kubenswrapper[4751]: I0130 21:49:14.005385 4751 scope.go:117] "RemoveContainer" containerID="af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0" Jan 30 21:49:14 crc kubenswrapper[4751]: E0130 21:49:14.005806 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0\": container with ID starting with af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0 not found: ID does not exist" containerID="af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0" Jan 30 21:49:14 crc kubenswrapper[4751]: I0130 21:49:14.005841 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0"} err="failed to get container status \"af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0\": rpc error: code = NotFound desc = could not find container \"af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0\": container with ID starting with af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0 not found: ID does not exist" Jan 30 21:49:14 crc kubenswrapper[4751]: I0130 21:49:14.005866 4751 scope.go:117] "RemoveContainer" containerID="9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb" Jan 30 21:49:14 crc kubenswrapper[4751]: E0130 21:49:14.006453 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb\": container with ID starting with 9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb not found: ID does not exist" containerID="9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb" Jan 30 21:49:14 crc kubenswrapper[4751]: I0130 21:49:14.006503 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb"} err="failed to get container status \"9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb\": rpc error: code = NotFound desc = could not find container \"9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb\": 
container with ID starting with 9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb not found: ID does not exist" Jan 30 21:49:14 crc kubenswrapper[4751]: I0130 21:49:14.006535 4751 scope.go:117] "RemoveContainer" containerID="f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c" Jan 30 21:49:14 crc kubenswrapper[4751]: E0130 21:49:14.006842 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c\": container with ID starting with f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c not found: ID does not exist" containerID="f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c" Jan 30 21:49:14 crc kubenswrapper[4751]: I0130 21:49:14.006879 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c"} err="failed to get container status \"f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c\": rpc error: code = NotFound desc = could not find container \"f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c\": container with ID starting with f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c not found: ID does not exist" Jan 30 21:49:14 crc kubenswrapper[4751]: I0130 21:49:14.046510 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cdda-account-create-update-xfmk4"] Jan 30 21:49:14 crc kubenswrapper[4751]: I0130 21:49:14.060577 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cdda-account-create-update-xfmk4"] Jan 30 21:49:15 crc kubenswrapper[4751]: I0130 21:49:15.990773 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7625d34-2ace-4774-89e4-72729d19ce99" path="/var/lib/kubelet/pods/f7625d34-2ace-4774-89e4-72729d19ce99/volumes" Jan 30 21:49:22 crc kubenswrapper[4751]: I0130 21:49:22.048058 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-6d52w"] Jan 30 21:49:22 crc kubenswrapper[4751]: I0130 21:49:22.059675 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6d52w"] Jan 30 21:49:23 crc kubenswrapper[4751]: I0130 21:49:23.045248 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-kj2ld"] Jan 30 21:49:23 crc kubenswrapper[4751]: I0130 21:49:23.059743 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-6e74-account-create-update-gdfb4"] Jan 30 21:49:23 crc kubenswrapper[4751]: I0130 21:49:23.075459 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-hx7xn"] Jan 30 21:49:23 crc kubenswrapper[4751]: I0130 21:49:23.085847 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-kj2ld"] Jan 30 21:49:23 crc kubenswrapper[4751]: I0130 21:49:23.097268 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-6e74-account-create-update-gdfb4"] Jan 30 21:49:23 crc kubenswrapper[4751]: I0130 21:49:23.107129 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-hx7xn"] Jan 30 21:49:23 crc kubenswrapper[4751]: I0130 21:49:23.137141 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2281-account-create-update-5l5m8"] Jan 30 21:49:23 crc kubenswrapper[4751]: I0130 21:49:23.150863 4751 
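Each "SyncLoop DELETE" above is the kubelet reacting to an API-side pod deletion, and "SyncLoop REMOVE" fires once the pod object is finally gone; "Killing container with a grace period" earlier enforced the deletion's grace period (gracePeriod=2 for the catalog pods). A minimal client-go sketch issuing the same kind of delete; the in-cluster config, namespace, and pod name here are illustrative:

```go
// Sketch: delete a pod with an explicit grace period via client-go.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	grace := int64(2) // mirrors gracePeriod=2 in the log above
	err = client.CoreV1().Pods("openshift-marketplace").Delete(
		context.Background(),
		"redhat-operators-6xw94",
		metav1.DeleteOptions{GracePeriodSeconds: &grace},
	)
	if err != nil {
		panic(err)
	}
}
```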
Jan 30 21:49:23 crc kubenswrapper[4751]: I0130 21:49:23.999884 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="444e34d6-7904-405b-956e-d23aed56537e" path="/var/lib/kubelet/pods/444e34d6-7904-405b-956e-d23aed56537e/volumes"
Jan 30 21:49:24 crc kubenswrapper[4751]: I0130 21:49:24.000981 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f139e0b-3ae5-4d5c-aa87-f15d00373f98" path="/var/lib/kubelet/pods/6f139e0b-3ae5-4d5c-aa87-f15d00373f98/volumes"
Jan 30 21:49:24 crc kubenswrapper[4751]: I0130 21:49:24.001627 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a169fb7b-bcf8-44d8-8942-a42a4de6001d" path="/var/lib/kubelet/pods/a169fb7b-bcf8-44d8-8942-a42a4de6001d/volumes"
Jan 30 21:49:24 crc kubenswrapper[4751]: I0130 21:49:24.002430 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8312bae-69c5-4c31-844e-42a90c18bfd3" path="/var/lib/kubelet/pods/a8312bae-69c5-4c31-844e-42a90c18bfd3/volumes"
Jan 30 21:49:24 crc kubenswrapper[4751]: I0130 21:49:24.003831 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5" path="/var/lib/kubelet/pods/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5/volumes"
Jan 30 21:49:26 crc kubenswrapper[4751]: I0130 21:49:26.078601 4751 scope.go:117] "RemoveContainer" containerID="819319f0811868394aa97eff76f3853ec44f21bc4e3fff54753bf1a73c6cb040"
Jan 30 21:49:26 crc kubenswrapper[4751]: I0130 21:49:26.120130 4751 scope.go:117] "RemoveContainer" containerID="bab22938ba50080dd9d55dae8178cb60ae0c855052ff172ac9ad37da3248c397"
Jan 30 21:49:26 crc kubenswrapper[4751]: I0130 21:49:26.178424 4751 scope.go:117] "RemoveContainer" containerID="1b2d27c5fa8a33163c2a6acc216d5d997d31face25a7d5b27edce913d857e2cf"
Jan 30 21:49:26 crc kubenswrapper[4751]: I0130 21:49:26.224540 4751 scope.go:117] "RemoveContainer" containerID="b0651f2072f5243ce1ec548bf97964b55b91bb1b69f6154e95b941b6b4ae52c4"
Jan 30 21:49:26 crc kubenswrapper[4751]: I0130 21:49:26.280657 4751 scope.go:117] "RemoveContainer" containerID="95a526791a15478d3cd5022079224ebd6df133da08d333aae07b4e691a9b11fa"
Jan 30 21:49:26 crc kubenswrapper[4751]: I0130 21:49:26.349707 4751 scope.go:117] "RemoveContainer" containerID="653c6822da8dfb62b9974deaabbf6807b6ceb59b253232e41aa972ac9d77b452"
Jan 30 21:49:26 crc kubenswrapper[4751]: I0130 21:49:26.381954 4751 scope.go:117] "RemoveContainer" containerID="2c32df1a18d1df9fb91c33d8041010429300b75cb742162ff675699f4b703b35"
Jan 30 21:49:26 crc kubenswrapper[4751]: I0130 21:49:26.406592 4751 scope.go:117] "RemoveContainer" containerID="801374aeb4ac1cff7c0c384bd6f348009c3a008674d2c7a597e16dd316c97dcd"
Jan 30 21:49:26 crc kubenswrapper[4751]: I0130 21:49:26.432869 4751 scope.go:117] "RemoveContainer" containerID="1417d08c74e8789435dc5b0b0ef29190de93021ca824a689f4696bce6b1679a8"
Jan 30 21:49:26 crc kubenswrapper[4751]: I0130 21:49:26.462976 4751 scope.go:117] "RemoveContainer" containerID="2a0909f318a30556974662d8829ef78a359e73d89a596474535f309a8b496094"
Jan 30 21:49:54 crc kubenswrapper[4751]: I0130 21:49:54.126634 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 21:49:54 crc kubenswrapper[4751]: I0130 21:49:54.127597 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 21:49:59 crc kubenswrapper[4751]: I0130 21:49:59.049960 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-47sz5"]
Jan 30 21:49:59 crc kubenswrapper[4751]: I0130 21:49:59.061902 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-47sz5"]
Jan 30 21:49:59 crc kubenswrapper[4751]: I0130 21:49:59.990895 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="551aecfb-7969-4644-ac50-b8f4c63002d3" path="/var/lib/kubelet/pods/551aecfb-7969-4644-ac50-b8f4c63002d3/volumes"
Jan 30 21:50:19 crc kubenswrapper[4751]: I0130 21:50:19.046390 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0010-account-create-update-t2pkp"]
Jan 30 21:50:19 crc kubenswrapper[4751]: I0130 21:50:19.057216 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0010-account-create-update-t2pkp"]
Jan 30 21:50:19 crc kubenswrapper[4751]: I0130 21:50:19.991580 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eb55a4f-933c-4871-b2d4-aed75e1449d7" path="/var/lib/kubelet/pods/8eb55a4f-933c-4871-b2d4-aed75e1449d7/volumes"
Jan 30 21:50:20 crc kubenswrapper[4751]: I0130 21:50:20.030072 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-vm8dd"]
Jan 30 21:50:20 crc kubenswrapper[4751]: I0130 21:50:20.040919 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-vm8dd"]
Jan 30 21:50:21 crc kubenswrapper[4751]: I0130 21:50:21.991108 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f243fc38-73c3-44ef-98b1-8c3086761087" path="/var/lib/kubelet/pods/f243fc38-73c3-44ef-98b1-8c3086761087/volumes"
Jan 30 21:50:24 crc kubenswrapper[4751]: I0130 21:50:24.037452 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-x464h"]
Jan 30 21:50:24 crc kubenswrapper[4751]: I0130 21:50:24.048310 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-x464h"]
Jan 30 21:50:24 crc kubenswrapper[4751]: I0130 21:50:24.126760 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 21:50:24 crc kubenswrapper[4751]: I0130 21:50:24.126812 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 21:50:25 crc kubenswrapper[4751]: I0130 21:50:25.991160 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd" path="/var/lib/kubelet/pods/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd/volumes"
Jan 30 21:50:26 crc kubenswrapper[4751]: I0130 21:50:26.754214 4751 scope.go:117] "RemoveContainer" containerID="c7bbfed7681d291cdba9800f3f96dcb721e8fd4853af323f0d29ccee985d7e37"
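The machine-config-daemon liveness failures above are plain HTTP probes: "connection refused" means nothing was listening on 127.0.0.1:8798 in the pod's network namespace at probe time. Re-creating the same check by hand (URL taken from the log; the 1s timeout is an assumption):

```go
// Sketch: replicate the kubelet's HTTP liveness check against the
// machine-config-daemon health endpoint from the log.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		fmt.Println("probe failure:", err) // e.g. connect: connection refused
		return
	}
	defer resp.Body.Close()
	// The kubelet treats any status in [200, 400) as a passing HTTP probe.
	fmt.Println("probe status:", resp.Status)
}
```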
containerID="c7bbfed7681d291cdba9800f3f96dcb721e8fd4853af323f0d29ccee985d7e37" Jan 30 21:50:26 crc kubenswrapper[4751]: I0130 21:50:26.788724 4751 scope.go:117] "RemoveContainer" containerID="33e12e7a910a881a922ed171c1d2a5e92dc23378252c88a8cc488f46dcc7cd9c" Jan 30 21:50:26 crc kubenswrapper[4751]: I0130 21:50:26.894819 4751 scope.go:117] "RemoveContainer" containerID="6d49b61e92e6eef2d8083686a2afeb4d6ae7d468f3b7fa9aa7d17b2c30415daf" Jan 30 21:50:26 crc kubenswrapper[4751]: I0130 21:50:26.960319 4751 scope.go:117] "RemoveContainer" containerID="17690b46bb105b4071eb9244efb55112436407df788ac66de199405e58cab561" Jan 30 21:50:31 crc kubenswrapper[4751]: I0130 21:50:31.062166 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-php6q"] Jan 30 21:50:31 crc kubenswrapper[4751]: I0130 21:50:31.079934 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-php6q"] Jan 30 21:50:31 crc kubenswrapper[4751]: I0130 21:50:31.998743 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e675971e-ba0e-4630-bc1b-bdf47a433dd7" path="/var/lib/kubelet/pods/e675971e-ba0e-4630-bc1b-bdf47a433dd7/volumes" Jan 30 21:50:33 crc kubenswrapper[4751]: I0130 21:50:33.844885 4751 generic.go:334] "Generic (PLEG): container finished" podID="b45d4d88-6b91-4bfc-9619-68fdb7d90f05" containerID="c30ac0e8be6c0815dba51db8d797c69e9ca710b0af1c400a4c5b3a4c953e6ebf" exitCode=0 Jan 30 21:50:33 crc kubenswrapper[4751]: I0130 21:50:33.844963 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" event={"ID":"b45d4d88-6b91-4bfc-9619-68fdb7d90f05","Type":"ContainerDied","Data":"c30ac0e8be6c0815dba51db8d797c69e9ca710b0af1c400a4c5b3a4c953e6ebf"} Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.407557 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.519719 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wz8d\" (UniqueName: \"kubernetes.io/projected/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-kube-api-access-6wz8d\") pod \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.519971 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-inventory\") pod \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.520034 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-ssh-key-openstack-edpm-ipam\") pod \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.525624 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-kube-api-access-6wz8d" (OuterVolumeSpecName: "kube-api-access-6wz8d") pod "b45d4d88-6b91-4bfc-9619-68fdb7d90f05" (UID: "b45d4d88-6b91-4bfc-9619-68fdb7d90f05"). InnerVolumeSpecName "kube-api-access-6wz8d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.567181 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b45d4d88-6b91-4bfc-9619-68fdb7d90f05" (UID: "b45d4d88-6b91-4bfc-9619-68fdb7d90f05"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.582414 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-inventory" (OuterVolumeSpecName: "inventory") pod "b45d4d88-6b91-4bfc-9619-68fdb7d90f05" (UID: "b45d4d88-6b91-4bfc-9619-68fdb7d90f05"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.623216 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wz8d\" (UniqueName: \"kubernetes.io/projected/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-kube-api-access-6wz8d\") on node \"crc\" DevicePath \"\"" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.623795 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.623928 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.869191 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" event={"ID":"b45d4d88-6b91-4bfc-9619-68fdb7d90f05","Type":"ContainerDied","Data":"871112c40c834fc0ee43096e7b64ddfc12d71cae78e7a64ab8d6c06bfe6ebb40"} Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.869498 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="871112c40c834fc0ee43096e7b64ddfc12d71cae78e7a64ab8d6c06bfe6ebb40" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.869291 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.967912 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz"] Jan 30 21:50:35 crc kubenswrapper[4751]: E0130 21:50:35.968411 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerName="extract-content" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968429 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerName="extract-content" Jan 30 21:50:35 crc kubenswrapper[4751]: E0130 21:50:35.968440 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerName="registry-server" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968447 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerName="registry-server" Jan 30 21:50:35 crc kubenswrapper[4751]: E0130 21:50:35.968459 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="registry-server" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968465 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="registry-server" Jan 30 21:50:35 crc kubenswrapper[4751]: E0130 21:50:35.968473 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="extract-utilities" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968479 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="extract-utilities" Jan 30 21:50:35 crc kubenswrapper[4751]: E0130 21:50:35.968488 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerName="registry-server" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968495 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerName="registry-server" Jan 30 21:50:35 crc kubenswrapper[4751]: E0130 21:50:35.968509 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerName="extract-utilities" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968514 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerName="extract-utilities" Jan 30 21:50:35 crc kubenswrapper[4751]: E0130 21:50:35.968530 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="extract-content" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968536 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="extract-content" Jan 30 21:50:35 crc kubenswrapper[4751]: E0130 21:50:35.968550 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45d4d88-6b91-4bfc-9619-68fdb7d90f05" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968561 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45d4d88-6b91-4bfc-9619-68fdb7d90f05" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 
21:50:35 crc kubenswrapper[4751]: E0130 21:50:35.968583 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerName="extract-content" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968590 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerName="extract-content" Jan 30 21:50:35 crc kubenswrapper[4751]: E0130 21:50:35.968608 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerName="extract-utilities" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968613 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerName="extract-utilities" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968820 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="registry-server" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968841 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerName="registry-server" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968859 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerName="registry-server" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968877 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45d4d88-6b91-4bfc-9619-68fdb7d90f05" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.969913 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.979461 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.979754 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.979898 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.980068 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.015357 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz"] Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.138190 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv8xw\" (UniqueName: \"kubernetes.io/projected/a21b5781-ce12-434c-9f38-47bf5f6ad332-kube-api-access-gv8xw\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.138276 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-ssh-key-openstack-edpm-ipam\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.138346 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.241292 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv8xw\" (UniqueName: \"kubernetes.io/projected/a21b5781-ce12-434c-9f38-47bf5f6ad332-kube-api-access-gv8xw\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.241437 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.241502 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.246040 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.252411 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.258419 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv8xw\" (UniqueName: \"kubernetes.io/projected/a21b5781-ce12-434c-9f38-47bf5f6ad332-kube-api-access-gv8xw\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.313836 4751 util.go:30] "No sandbox for pod can be found. 
Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.887716 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz"]
Jan 30 21:50:37 crc kubenswrapper[4751]: I0130 21:50:37.897910 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" event={"ID":"a21b5781-ce12-434c-9f38-47bf5f6ad332","Type":"ContainerStarted","Data":"a7c779b234512f8b77468952ed7db747aa6f4d564b951f04213d13fdfbc73d63"}
Jan 30 21:50:37 crc kubenswrapper[4751]: I0130 21:50:37.898143 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" event={"ID":"a21b5781-ce12-434c-9f38-47bf5f6ad332","Type":"ContainerStarted","Data":"8b16809b402a2b33c62fd6eb1118ebc1aaf20a76b782d78a45f03fb5b59e3144"}
Jan 30 21:50:37 crc kubenswrapper[4751]: I0130 21:50:37.914294 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" podStartSLOduration=2.492257927 podStartE2EDuration="2.914259898s" podCreationTimestamp="2026-01-30 21:50:35 +0000 UTC" firstStartedPulling="2026-01-30 21:50:36.897544163 +0000 UTC m=+2175.643366832" lastFinishedPulling="2026-01-30 21:50:37.319546154 +0000 UTC m=+2176.065368803" observedRunningTime="2026-01-30 21:50:37.912694036 +0000 UTC m=+2176.658516695" watchObservedRunningTime="2026-01-30 21:50:37.914259898 +0000 UTC m=+2176.660082597"
Jan 30 21:50:54 crc kubenswrapper[4751]: I0130 21:50:54.126676 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 21:50:54 crc kubenswrapper[4751]: I0130 21:50:54.129215 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 21:50:54 crc kubenswrapper[4751]: I0130 21:50:54.129387 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp"
Jan 30 21:50:54 crc kubenswrapper[4751]: I0130 21:50:54.130564 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 21:50:54 crc kubenswrapper[4751]: I0130 21:50:54.130720 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" gracePeriod=600
Jan 30 21:50:54 crc kubenswrapper[4751]: E0130 21:50:54.278106 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
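The pod_startup_latency_tracker line above encodes one subtraction: podStartSLOduration is the end-to-end startup time minus the image-pull window, computed on the monotonic clock (the m=+ offsets). Checking the reported numbers: the pull window is 2176.065368803 − 2175.643366832 = 0.422001971s, and 2.914259898 − 0.422001971 = 2.492257927s, exactly the podStartSLOduration logged. A tiny verification:

```go
// Sketch: reproduce the podStartSLOduration arithmetic from the log line.
package main

import "fmt"

func main() {
	const (
		firstStartedPulling = 2175.643366832 // m=+ monotonic offset, seconds
		lastFinishedPulling = 2176.065368803
		podStartE2E         = 2.914259898 // podStartE2EDuration
	)
	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("pull window:         %.9fs\n", pull)
	fmt.Printf("podStartSLOduration: %.9fs\n", podStartE2E-pull)
}
```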
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:50:55 crc kubenswrapper[4751]: I0130 21:50:55.075900 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" exitCode=0 Jan 30 21:50:55 crc kubenswrapper[4751]: I0130 21:50:55.075969 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9"} Jan 30 21:50:55 crc kubenswrapper[4751]: I0130 21:50:55.076798 4751 scope.go:117] "RemoveContainer" containerID="e83ca35bd085af955b4b3e0476bcb9169304b85473995bcb3f76de779bdcffb0" Jan 30 21:50:55 crc kubenswrapper[4751]: I0130 21:50:55.078080 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:50:55 crc kubenswrapper[4751]: E0130 21:50:55.078627 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:51:06 crc kubenswrapper[4751]: I0130 21:51:06.977177 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:51:06 crc kubenswrapper[4751]: E0130 21:51:06.979146 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:51:12 crc kubenswrapper[4751]: I0130 21:51:12.048348 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-cxj8k"] Jan 30 21:51:12 crc kubenswrapper[4751]: I0130 21:51:12.059591 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-cxj8k"] Jan 30 21:51:13 crc kubenswrapper[4751]: I0130 21:51:13.994731 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97ec060c-3c30-41e4-946c-7fb4584c7e85" path="/var/lib/kubelet/pods/97ec060c-3c30-41e4-946c-7fb4584c7e85/volumes" Jan 30 21:51:21 crc kubenswrapper[4751]: I0130 21:51:21.997972 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:51:22 crc kubenswrapper[4751]: E0130 21:51:22.000828 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:51:27 crc kubenswrapper[4751]: I0130 21:51:27.118051 4751 scope.go:117] "RemoveContainer" containerID="15b7f342e1abdf85738c2010d50dd3bc6a8ad893d7ecab47d753b5a1b032305d" Jan 30 21:51:27 crc kubenswrapper[4751]: I0130 21:51:27.161598 4751 scope.go:117] "RemoveContainer" containerID="d9cc3235ea6a465f2a125270f4c9765fed925e13c4baa2e715494daa6238d33f" Jan 30 21:51:36 crc kubenswrapper[4751]: I0130 21:51:36.976069 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:51:36 crc kubenswrapper[4751]: E0130 21:51:36.977162 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:51:40 crc kubenswrapper[4751]: I0130 21:51:40.543269 4751 generic.go:334] "Generic (PLEG): container finished" podID="a21b5781-ce12-434c-9f38-47bf5f6ad332" containerID="a7c779b234512f8b77468952ed7db747aa6f4d564b951f04213d13fdfbc73d63" exitCode=0 Jan 30 21:51:40 crc kubenswrapper[4751]: I0130 21:51:40.543433 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" event={"ID":"a21b5781-ce12-434c-9f38-47bf5f6ad332","Type":"ContainerDied","Data":"a7c779b234512f8b77468952ed7db747aa6f4d564b951f04213d13fdfbc73d63"} Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.053690 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.109905 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-ssh-key-openstack-edpm-ipam\") pod \"a21b5781-ce12-434c-9f38-47bf5f6ad332\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.110046 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-inventory\") pod \"a21b5781-ce12-434c-9f38-47bf5f6ad332\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.110921 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv8xw\" (UniqueName: \"kubernetes.io/projected/a21b5781-ce12-434c-9f38-47bf5f6ad332-kube-api-access-gv8xw\") pod \"a21b5781-ce12-434c-9f38-47bf5f6ad332\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.128602 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a21b5781-ce12-434c-9f38-47bf5f6ad332-kube-api-access-gv8xw" (OuterVolumeSpecName: "kube-api-access-gv8xw") pod "a21b5781-ce12-434c-9f38-47bf5f6ad332" (UID: "a21b5781-ce12-434c-9f38-47bf5f6ad332"). 
InnerVolumeSpecName "kube-api-access-gv8xw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.143467 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-inventory" (OuterVolumeSpecName: "inventory") pod "a21b5781-ce12-434c-9f38-47bf5f6ad332" (UID: "a21b5781-ce12-434c-9f38-47bf5f6ad332"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.175552 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a21b5781-ce12-434c-9f38-47bf5f6ad332" (UID: "a21b5781-ce12-434c-9f38-47bf5f6ad332"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.214444 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.214482 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv8xw\" (UniqueName: \"kubernetes.io/projected/a21b5781-ce12-434c-9f38-47bf5f6ad332-kube-api-access-gv8xw\") on node \"crc\" DevicePath \"\"" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.214500 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.576702 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" event={"ID":"a21b5781-ce12-434c-9f38-47bf5f6ad332","Type":"ContainerDied","Data":"8b16809b402a2b33c62fd6eb1118ebc1aaf20a76b782d78a45f03fb5b59e3144"} Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.577018 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b16809b402a2b33c62fd6eb1118ebc1aaf20a76b782d78a45f03fb5b59e3144" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.577116 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.666533 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5"] Jan 30 21:51:42 crc kubenswrapper[4751]: E0130 21:51:42.667316 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a21b5781-ce12-434c-9f38-47bf5f6ad332" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.667435 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a21b5781-ce12-434c-9f38-47bf5f6ad332" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.667816 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a21b5781-ce12-434c-9f38-47bf5f6ad332" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.669244 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.675142 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.675465 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.675630 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.675858 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.676755 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5"] Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.726710 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-plxr5\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.727310 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-plxr5\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.727477 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knq68\" (UniqueName: \"kubernetes.io/projected/538f9f69-1642-4944-a5e1-7348a104c5e6-kube-api-access-knq68\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-plxr5\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.829145 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-plxr5\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.829197 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knq68\" (UniqueName: \"kubernetes.io/projected/538f9f69-1642-4944-a5e1-7348a104c5e6-kube-api-access-knq68\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-plxr5\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.829372 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-plxr5\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.833788 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-plxr5\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.834161 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-plxr5\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.846890 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knq68\" (UniqueName: \"kubernetes.io/projected/538f9f69-1642-4944-a5e1-7348a104c5e6-kube-api-access-knq68\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-plxr5\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.996475 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:43 crc kubenswrapper[4751]: I0130 21:51:43.532834 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5"] Jan 30 21:51:43 crc kubenswrapper[4751]: I0130 21:51:43.589418 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" event={"ID":"538f9f69-1642-4944-a5e1-7348a104c5e6","Type":"ContainerStarted","Data":"e219e09b3c857b7e7aa40769b83e0190b3c1bc2841c1fcd0227b1672de545324"} Jan 30 21:51:44 crc kubenswrapper[4751]: I0130 21:51:44.605484 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" event={"ID":"538f9f69-1642-4944-a5e1-7348a104c5e6","Type":"ContainerStarted","Data":"cb8ae5e2affef33560219351fbe7944b686569dcc42ec07890013c934f74a73a"} Jan 30 21:51:44 crc kubenswrapper[4751]: I0130 21:51:44.634589 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" podStartSLOduration=2.188709216 podStartE2EDuration="2.634545891s" podCreationTimestamp="2026-01-30 21:51:42 +0000 UTC" firstStartedPulling="2026-01-30 21:51:43.536136721 +0000 UTC m=+2242.281959390" lastFinishedPulling="2026-01-30 21:51:43.981973416 +0000 UTC m=+2242.727796065" observedRunningTime="2026-01-30 21:51:44.624152428 +0000 UTC m=+2243.369975077" watchObservedRunningTime="2026-01-30 21:51:44.634545891 +0000 UTC m=+2243.380368540" Jan 30 21:51:47 crc kubenswrapper[4751]: I0130 21:51:47.976866 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:51:47 crc kubenswrapper[4751]: E0130 21:51:47.977814 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:51:48 crc kubenswrapper[4751]: I0130 21:51:48.648957 4751 generic.go:334] "Generic (PLEG): container finished" podID="538f9f69-1642-4944-a5e1-7348a104c5e6" containerID="cb8ae5e2affef33560219351fbe7944b686569dcc42ec07890013c934f74a73a" exitCode=0 Jan 30 21:51:48 crc kubenswrapper[4751]: I0130 21:51:48.649026 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" event={"ID":"538f9f69-1642-4944-a5e1-7348a104c5e6","Type":"ContainerDied","Data":"cb8ae5e2affef33560219351fbe7944b686569dcc42ec07890013c934f74a73a"} Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.141211 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.229034 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-ssh-key-openstack-edpm-ipam\") pod \"538f9f69-1642-4944-a5e1-7348a104c5e6\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.229206 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knq68\" (UniqueName: \"kubernetes.io/projected/538f9f69-1642-4944-a5e1-7348a104c5e6-kube-api-access-knq68\") pod \"538f9f69-1642-4944-a5e1-7348a104c5e6\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.229451 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-inventory\") pod \"538f9f69-1642-4944-a5e1-7348a104c5e6\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.238406 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/538f9f69-1642-4944-a5e1-7348a104c5e6-kube-api-access-knq68" (OuterVolumeSpecName: "kube-api-access-knq68") pod "538f9f69-1642-4944-a5e1-7348a104c5e6" (UID: "538f9f69-1642-4944-a5e1-7348a104c5e6"). InnerVolumeSpecName "kube-api-access-knq68". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.265213 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-inventory" (OuterVolumeSpecName: "inventory") pod "538f9f69-1642-4944-a5e1-7348a104c5e6" (UID: "538f9f69-1642-4944-a5e1-7348a104c5e6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.288516 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "538f9f69-1642-4944-a5e1-7348a104c5e6" (UID: "538f9f69-1642-4944-a5e1-7348a104c5e6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.332471 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.332512 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.332522 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knq68\" (UniqueName: \"kubernetes.io/projected/538f9f69-1642-4944-a5e1-7348a104c5e6-kube-api-access-knq68\") on node \"crc\" DevicePath \"\"" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.674735 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" event={"ID":"538f9f69-1642-4944-a5e1-7348a104c5e6","Type":"ContainerDied","Data":"e219e09b3c857b7e7aa40769b83e0190b3c1bc2841c1fcd0227b1672de545324"} Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.674799 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e219e09b3c857b7e7aa40769b83e0190b3c1bc2841c1fcd0227b1672de545324" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.674837 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.763738 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7"] Jan 30 21:51:50 crc kubenswrapper[4751]: E0130 21:51:50.764264 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="538f9f69-1642-4944-a5e1-7348a104c5e6" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.764284 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="538f9f69-1642-4944-a5e1-7348a104c5e6" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.764595 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="538f9f69-1642-4944-a5e1-7348a104c5e6" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.765449 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.777812 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.778010 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.778015 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.778157 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.797499 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7"] Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.846852 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-trgr7\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.847217 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2ct4\" (UniqueName: \"kubernetes.io/projected/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-kube-api-access-d2ct4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-trgr7\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.847313 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-trgr7\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.949023 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2ct4\" (UniqueName: \"kubernetes.io/projected/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-kube-api-access-d2ct4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-trgr7\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.949179 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-trgr7\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.949265 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-trgr7\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.953112 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-trgr7\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.953669 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-trgr7\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.966985 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2ct4\" (UniqueName: \"kubernetes.io/projected/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-kube-api-access-d2ct4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-trgr7\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:51 crc kubenswrapper[4751]: I0130 21:51:51.095581 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:51 crc kubenswrapper[4751]: I0130 21:51:51.663986 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7"] Jan 30 21:51:51 crc kubenswrapper[4751]: I0130 21:51:51.684361 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" event={"ID":"aa80e137-3a03-4857-9ec0-aa2f9b58df0d","Type":"ContainerStarted","Data":"a9f316a0cd1ae67e29d2d85363c55f6874baca6a681b729bfa02385b2545da1a"} Jan 30 21:51:52 crc kubenswrapper[4751]: I0130 21:51:52.697514 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" event={"ID":"aa80e137-3a03-4857-9ec0-aa2f9b58df0d","Type":"ContainerStarted","Data":"367fd3667550a5d0d4cc5ab26c8f9f154492a6ce72190b8a861ed642c787870f"} Jan 30 21:51:52 crc kubenswrapper[4751]: I0130 21:51:52.726796 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" podStartSLOduration=2.289203623 podStartE2EDuration="2.726775993s" podCreationTimestamp="2026-01-30 21:51:50 +0000 UTC" firstStartedPulling="2026-01-30 21:51:51.666383529 +0000 UTC m=+2250.412206188" lastFinishedPulling="2026-01-30 21:51:52.103955909 +0000 UTC m=+2250.849778558" observedRunningTime="2026-01-30 21:51:52.721004366 +0000 UTC m=+2251.466827025" watchObservedRunningTime="2026-01-30 21:51:52.726775993 +0000 UTC m=+2251.472598652" Jan 30 21:52:00 crc kubenswrapper[4751]: I0130 21:52:00.976352 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:52:00 crc kubenswrapper[4751]: E0130 21:52:00.977259 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:52:11 crc kubenswrapper[4751]: I0130 21:52:11.986260 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:52:11 crc kubenswrapper[4751]: E0130 21:52:11.987260 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.412122 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7xkmv"] Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.415991 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.423824 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xkmv"] Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.524080 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfs2m\" (UniqueName: \"kubernetes.io/projected/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-kube-api-access-tfs2m\") pod \"redhat-marketplace-7xkmv\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.524312 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-utilities\") pod \"redhat-marketplace-7xkmv\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.524512 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-catalog-content\") pod \"redhat-marketplace-7xkmv\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.627289 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-utilities\") pod \"redhat-marketplace-7xkmv\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.627442 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-catalog-content\") pod \"redhat-marketplace-7xkmv\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " 
pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.627634 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs2m\" (UniqueName: \"kubernetes.io/projected/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-kube-api-access-tfs2m\") pod \"redhat-marketplace-7xkmv\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.627931 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-utilities\") pod \"redhat-marketplace-7xkmv\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.628413 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-catalog-content\") pod \"redhat-marketplace-7xkmv\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.651906 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfs2m\" (UniqueName: \"kubernetes.io/projected/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-kube-api-access-tfs2m\") pod \"redhat-marketplace-7xkmv\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.780161 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:22 crc kubenswrapper[4751]: I0130 21:52:22.335516 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xkmv"] Jan 30 21:52:22 crc kubenswrapper[4751]: I0130 21:52:22.975490 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:52:22 crc kubenswrapper[4751]: E0130 21:52:22.976084 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:52:23 crc kubenswrapper[4751]: I0130 21:52:23.015132 4751 generic.go:334] "Generic (PLEG): container finished" podID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerID="b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d" exitCode=0 Jan 30 21:52:23 crc kubenswrapper[4751]: I0130 21:52:23.015186 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xkmv" event={"ID":"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60","Type":"ContainerDied","Data":"b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d"} Jan 30 21:52:23 crc kubenswrapper[4751]: I0130 21:52:23.015216 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xkmv" 
event={"ID":"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60","Type":"ContainerStarted","Data":"09ba0b000866b341461be8fd8aa4a6998ab86161b69ce1d54a289f04c444e094"} Jan 30 21:52:25 crc kubenswrapper[4751]: I0130 21:52:25.047945 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xkmv" event={"ID":"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60","Type":"ContainerStarted","Data":"2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac"} Jan 30 21:52:25 crc kubenswrapper[4751]: I0130 21:52:25.054843 4751 generic.go:334] "Generic (PLEG): container finished" podID="aa80e137-3a03-4857-9ec0-aa2f9b58df0d" containerID="367fd3667550a5d0d4cc5ab26c8f9f154492a6ce72190b8a861ed642c787870f" exitCode=0 Jan 30 21:52:25 crc kubenswrapper[4751]: I0130 21:52:25.054881 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" event={"ID":"aa80e137-3a03-4857-9ec0-aa2f9b58df0d","Type":"ContainerDied","Data":"367fd3667550a5d0d4cc5ab26c8f9f154492a6ce72190b8a861ed642c787870f"} Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.065874 4751 generic.go:334] "Generic (PLEG): container finished" podID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerID="2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac" exitCode=0 Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.065948 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xkmv" event={"ID":"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60","Type":"ContainerDied","Data":"2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac"} Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.523052 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.643852 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-ssh-key-openstack-edpm-ipam\") pod \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.644086 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-inventory\") pod \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.644222 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2ct4\" (UniqueName: \"kubernetes.io/projected/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-kube-api-access-d2ct4\") pod \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.650690 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-kube-api-access-d2ct4" (OuterVolumeSpecName: "kube-api-access-d2ct4") pod "aa80e137-3a03-4857-9ec0-aa2f9b58df0d" (UID: "aa80e137-3a03-4857-9ec0-aa2f9b58df0d"). InnerVolumeSpecName "kube-api-access-d2ct4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.676876 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-inventory" (OuterVolumeSpecName: "inventory") pod "aa80e137-3a03-4857-9ec0-aa2f9b58df0d" (UID: "aa80e137-3a03-4857-9ec0-aa2f9b58df0d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.686027 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "aa80e137-3a03-4857-9ec0-aa2f9b58df0d" (UID: "aa80e137-3a03-4857-9ec0-aa2f9b58df0d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.746508 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.746542 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2ct4\" (UniqueName: \"kubernetes.io/projected/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-kube-api-access-d2ct4\") on node \"crc\" DevicePath \"\"" Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.746554 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.077442 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xkmv" event={"ID":"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60","Type":"ContainerStarted","Data":"aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562"} Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.079162 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" event={"ID":"aa80e137-3a03-4857-9ec0-aa2f9b58df0d","Type":"ContainerDied","Data":"a9f316a0cd1ae67e29d2d85363c55f6874baca6a681b729bfa02385b2545da1a"} Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.079192 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9f316a0cd1ae67e29d2d85363c55f6874baca6a681b729bfa02385b2545da1a" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.079249 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.124295 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7xkmv" podStartSLOduration=2.583815433 podStartE2EDuration="6.124272867s" podCreationTimestamp="2026-01-30 21:52:21 +0000 UTC" firstStartedPulling="2026-01-30 21:52:23.017068139 +0000 UTC m=+2281.762890788" lastFinishedPulling="2026-01-30 21:52:26.557525573 +0000 UTC m=+2285.303348222" observedRunningTime="2026-01-30 21:52:27.109145284 +0000 UTC m=+2285.854967963" watchObservedRunningTime="2026-01-30 21:52:27.124272867 +0000 UTC m=+2285.870095526" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.181235 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2"] Jan 30 21:52:27 crc kubenswrapper[4751]: E0130 21:52:27.181737 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa80e137-3a03-4857-9ec0-aa2f9b58df0d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.181753 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa80e137-3a03-4857-9ec0-aa2f9b58df0d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.182256 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa80e137-3a03-4857-9ec0-aa2f9b58df0d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.183066 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.186012 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.186152 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.186473 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.186598 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.194734 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2"] Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.258313 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-czgz2\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.258858 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdbt4\" (UniqueName: \"kubernetes.io/projected/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-kube-api-access-vdbt4\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-czgz2\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.259252 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-czgz2\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.361242 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-czgz2\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.361456 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-czgz2\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.361489 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdbt4\" (UniqueName: \"kubernetes.io/projected/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-kube-api-access-vdbt4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-czgz2\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.366401 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-czgz2\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.367728 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-czgz2\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.385678 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdbt4\" (UniqueName: \"kubernetes.io/projected/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-kube-api-access-vdbt4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-czgz2\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.541436 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:28 crc kubenswrapper[4751]: I0130 21:52:28.106522 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2"] Jan 30 21:52:28 crc kubenswrapper[4751]: W0130 21:52:28.110031 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39c3c6d6_5ce3_4522_acc1_1ebbe5748f0d.slice/crio-5c7854022bb4b1a27210ed4d576703ccdd028b1c681e62b72656a04ce9d99072 WatchSource:0}: Error finding container 5c7854022bb4b1a27210ed4d576703ccdd028b1c681e62b72656a04ce9d99072: Status 404 returned error can't find the container with id 5c7854022bb4b1a27210ed4d576703ccdd028b1c681e62b72656a04ce9d99072 Jan 30 21:52:29 crc kubenswrapper[4751]: I0130 21:52:29.100754 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" event={"ID":"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d","Type":"ContainerStarted","Data":"5c41e9203b76ddcd7439688158ffdc87678069a17d135cc32112f263aa9ea41d"} Jan 30 21:52:29 crc kubenswrapper[4751]: I0130 21:52:29.101526 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" event={"ID":"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d","Type":"ContainerStarted","Data":"5c7854022bb4b1a27210ed4d576703ccdd028b1c681e62b72656a04ce9d99072"} Jan 30 21:52:29 crc kubenswrapper[4751]: I0130 21:52:29.129946 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" podStartSLOduration=1.728315925 podStartE2EDuration="2.129890882s" podCreationTimestamp="2026-01-30 21:52:27 +0000 UTC" firstStartedPulling="2026-01-30 21:52:28.11174171 +0000 UTC m=+2286.857564369" lastFinishedPulling="2026-01-30 21:52:28.513316677 +0000 UTC m=+2287.259139326" observedRunningTime="2026-01-30 21:52:29.118787539 +0000 UTC m=+2287.864610198" watchObservedRunningTime="2026-01-30 21:52:29.129890882 +0000 UTC m=+2287.875713541" Jan 30 21:52:31 crc kubenswrapper[4751]: I0130 21:52:31.781239 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:31 crc kubenswrapper[4751]: I0130 21:52:31.781918 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:31 crc kubenswrapper[4751]: I0130 21:52:31.854287 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:32 crc kubenswrapper[4751]: I0130 21:52:32.196531 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:32 crc kubenswrapper[4751]: I0130 21:52:32.247226 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xkmv"] Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.159909 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7xkmv" podUID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerName="registry-server" containerID="cri-o://aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562" gracePeriod=2 Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.683799 4751 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.745165 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfs2m\" (UniqueName: \"kubernetes.io/projected/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-kube-api-access-tfs2m\") pod \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.745226 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-utilities\") pod \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.745360 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-catalog-content\") pod \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.746462 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-utilities" (OuterVolumeSpecName: "utilities") pod "7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" (UID: "7b69623d-0f4c-4ac8-b36d-bd431d5aeb60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.755105 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-kube-api-access-tfs2m" (OuterVolumeSpecName: "kube-api-access-tfs2m") pod "7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" (UID: "7b69623d-0f4c-4ac8-b36d-bd431d5aeb60"). InnerVolumeSpecName "kube-api-access-tfs2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.781462 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" (UID: "7b69623d-0f4c-4ac8-b36d-bd431d5aeb60"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.848671 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.848707 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfs2m\" (UniqueName: \"kubernetes.io/projected/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-kube-api-access-tfs2m\") on node \"crc\" DevicePath \"\"" Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.848720 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.171995 4751 generic.go:334] "Generic (PLEG): container finished" podID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerID="aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562" exitCode=0 Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.172050 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.172111 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xkmv" event={"ID":"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60","Type":"ContainerDied","Data":"aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562"} Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.172161 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xkmv" event={"ID":"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60","Type":"ContainerDied","Data":"09ba0b000866b341461be8fd8aa4a6998ab86161b69ce1d54a289f04c444e094"} Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.172180 4751 scope.go:117] "RemoveContainer" containerID="aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.202414 4751 scope.go:117] "RemoveContainer" containerID="2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.210784 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xkmv"] Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.220633 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xkmv"] Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.231993 4751 scope.go:117] "RemoveContainer" containerID="b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.307297 4751 scope.go:117] "RemoveContainer" containerID="aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562" Jan 30 21:52:35 crc kubenswrapper[4751]: E0130 21:52:35.307723 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562\": container with ID starting with aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562 not found: ID does not exist" containerID="aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.307751 4751 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562"} err="failed to get container status \"aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562\": rpc error: code = NotFound desc = could not find container \"aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562\": container with ID starting with aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562 not found: ID does not exist" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.307769 4751 scope.go:117] "RemoveContainer" containerID="2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac" Jan 30 21:52:35 crc kubenswrapper[4751]: E0130 21:52:35.308151 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac\": container with ID starting with 2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac not found: ID does not exist" containerID="2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.308175 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac"} err="failed to get container status \"2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac\": rpc error: code = NotFound desc = could not find container \"2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac\": container with ID starting with 2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac not found: ID does not exist" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.308189 4751 scope.go:117] "RemoveContainer" containerID="b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d" Jan 30 21:52:35 crc kubenswrapper[4751]: E0130 21:52:35.308491 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d\": container with ID starting with b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d not found: ID does not exist" containerID="b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.308538 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d"} err="failed to get container status \"b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d\": rpc error: code = NotFound desc = could not find container \"b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d\": container with ID starting with b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d not found: ID does not exist" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.994207 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" path="/var/lib/kubelet/pods/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60/volumes"
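
The "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pairs above are a benign race rather than a real failure: the kubelet re-issues RemoveContainer for containers CRI-O has already deleted, gets NotFound back, and carries on. A minimal sketch of that idempotent-delete pattern (the NotFound class and FakeRuntime stub are hypothetical, for illustration only):

    class NotFound(Exception):
        """Stand-in for the gRPC NotFound status seen in the log above."""

    class FakeRuntime:
        def __init__(self, ids):
            self.ids = set(ids)

        def remove(self, cid):
            if cid not in self.ids:
                raise NotFound(cid)
            self.ids.discard(cid)

    def delete_container(runtime, cid):
        # Treat NotFound as success: another cleanup path already removed it.
        try:
            runtime.remove(cid)
        except NotFound:
            pass

    rt = FakeRuntime({"aff0cc54"})
    delete_container(rt, "aff0cc54")  # removes the container
    delete_container(rt, "aff0cc54")  # second delete is a no-op, like the retries above
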
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:52:51 crc kubenswrapper[4751]: I0130 21:52:51.985266 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:52:51 crc kubenswrapper[4751]: E0130 21:52:51.986394 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:53:06 crc kubenswrapper[4751]: I0130 21:53:06.976394 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:53:06 crc kubenswrapper[4751]: E0130 21:53:06.977388 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:53:11 crc kubenswrapper[4751]: I0130 21:53:11.594410 4751 generic.go:334] "Generic (PLEG): container finished" podID="39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d" containerID="5c41e9203b76ddcd7439688158ffdc87678069a17d135cc32112f263aa9ea41d" exitCode=0 Jan 30 21:53:11 crc kubenswrapper[4751]: I0130 21:53:11.594501 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" event={"ID":"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d","Type":"ContainerDied","Data":"5c41e9203b76ddcd7439688158ffdc87678069a17d135cc32112f263aa9ea41d"} Jan 30 21:53:12 crc kubenswrapper[4751]: I0130 21:53:12.050611 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-xw5xf"] Jan 30 21:53:12 crc kubenswrapper[4751]: I0130 21:53:12.065359 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-xw5xf"] Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.196031 4751 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.196031 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.389681 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-inventory\") pod \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.390753 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdbt4\" (UniqueName: \"kubernetes.io/projected/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-kube-api-access-vdbt4\") pod \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.390797 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-ssh-key-openstack-edpm-ipam\") pod \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.395872 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-kube-api-access-vdbt4" (OuterVolumeSpecName: "kube-api-access-vdbt4") pod "39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d" (UID: "39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d"). InnerVolumeSpecName "kube-api-access-vdbt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.422856 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-inventory" (OuterVolumeSpecName: "inventory") pod "39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d" (UID: "39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.447624 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d" (UID: "39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.493759 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdbt4\" (UniqueName: \"kubernetes.io/projected/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-kube-api-access-vdbt4\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.493817 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.493832 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.614556 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" event={"ID":"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d","Type":"ContainerDied","Data":"5c7854022bb4b1a27210ed4d576703ccdd028b1c681e62b72656a04ce9d99072"} Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.614597 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c7854022bb4b1a27210ed4d576703ccdd028b1c681e62b72656a04ce9d99072" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.614687 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.761782 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gtfdq"] Jan 30 21:53:13 crc kubenswrapper[4751]: E0130 21:53:13.762497 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.762521 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:53:13 crc kubenswrapper[4751]: E0130 21:53:13.762560 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerName="extract-utilities" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.762570 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerName="extract-utilities" Jan 30 21:53:13 crc kubenswrapper[4751]: E0130 21:53:13.762587 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerName="extract-content" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.762595 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerName="extract-content" Jan 30 21:53:13 crc kubenswrapper[4751]: E0130 21:53:13.762606 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerName="registry-server" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.762614 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerName="registry-server" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.763085 4751 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerName="registry-server" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.763141 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.764246 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.766515 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.766829 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.767351 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.767588 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.774631 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gtfdq"] Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.904660 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5g7n\" (UniqueName: \"kubernetes.io/projected/1c9c26ff-407a-4595-8406-e3a0d46450aa-kube-api-access-p5g7n\") pod \"ssh-known-hosts-edpm-deployment-gtfdq\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.905880 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gtfdq\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.906124 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gtfdq\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:14 crc kubenswrapper[4751]: I0130 21:53:14.041401 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5g7n\" (UniqueName: \"kubernetes.io/projected/1c9c26ff-407a-4595-8406-e3a0d46450aa-kube-api-access-p5g7n\") pod \"ssh-known-hosts-edpm-deployment-gtfdq\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:14 crc kubenswrapper[4751]: I0130 21:53:14.041560 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gtfdq\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:14 crc kubenswrapper[4751]: I0130 21:53:14.041716 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gtfdq\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:14 crc kubenswrapper[4751]: I0130 21:53:14.046366 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27b928b3-101e-4649-ae57-9857145062f0" path="/var/lib/kubelet/pods/27b928b3-101e-4649-ae57-9857145062f0/volumes" Jan 30 21:53:14 crc kubenswrapper[4751]: I0130 21:53:14.047800 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gtfdq\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:14 crc kubenswrapper[4751]: I0130 21:53:14.047812 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gtfdq\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:14 crc kubenswrapper[4751]: I0130 21:53:14.065222 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5g7n\" (UniqueName: \"kubernetes.io/projected/1c9c26ff-407a-4595-8406-e3a0d46450aa-kube-api-access-p5g7n\") pod \"ssh-known-hosts-edpm-deployment-gtfdq\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:14 crc kubenswrapper[4751]: I0130 21:53:14.085305 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:14 crc kubenswrapper[4751]: I0130 21:53:14.696594 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 21:53:14 crc kubenswrapper[4751]: I0130 21:53:14.706057 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gtfdq"] Jan 30 21:53:15 crc kubenswrapper[4751]: I0130 21:53:15.635447 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" event={"ID":"1c9c26ff-407a-4595-8406-e3a0d46450aa","Type":"ContainerStarted","Data":"7b7c6addc2724d8025fc8ecb6877f8749ca44ae396ec2d88ec6aa78682935da0"} Jan 30 21:53:15 crc kubenswrapper[4751]: I0130 21:53:15.636094 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" event={"ID":"1c9c26ff-407a-4595-8406-e3a0d46450aa","Type":"ContainerStarted","Data":"2c8a69c39dae650df7fb0266ab7ac5aeadc94ff22eeb9c5ce3e0af1e9323b23a"} Jan 30 21:53:15 crc kubenswrapper[4751]: I0130 21:53:15.686932 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" podStartSLOduration=2.065445519 podStartE2EDuration="2.686915456s" podCreationTimestamp="2026-01-30 21:53:13 +0000 UTC" firstStartedPulling="2026-01-30 21:53:14.696314787 +0000 UTC m=+2333.442137436" lastFinishedPulling="2026-01-30 21:53:15.317784724 +0000 UTC m=+2334.063607373" observedRunningTime="2026-01-30 21:53:15.683463972 +0000 UTC m=+2334.429286621" watchObservedRunningTime="2026-01-30 21:53:15.686915456 +0000 UTC m=+2334.432738105" Jan 30 21:53:21 crc kubenswrapper[4751]: I0130 21:53:21.986111 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:53:21 crc kubenswrapper[4751]: E0130 21:53:21.986833 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:53:22 crc kubenswrapper[4751]: I0130 21:53:22.705722 4751 generic.go:334] "Generic (PLEG): container finished" podID="1c9c26ff-407a-4595-8406-e3a0d46450aa" containerID="7b7c6addc2724d8025fc8ecb6877f8749ca44ae396ec2d88ec6aa78682935da0" exitCode=0 Jan 30 21:53:22 crc kubenswrapper[4751]: I0130 21:53:22.705794 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" event={"ID":"1c9c26ff-407a-4595-8406-e3a0d46450aa","Type":"ContainerDied","Data":"7b7c6addc2724d8025fc8ecb6877f8749ca44ae396ec2d88ec6aa78682935da0"} Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.231484 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.397784 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5g7n\" (UniqueName: \"kubernetes.io/projected/1c9c26ff-407a-4595-8406-e3a0d46450aa-kube-api-access-p5g7n\") pod \"1c9c26ff-407a-4595-8406-e3a0d46450aa\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.398022 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-inventory-0\") pod \"1c9c26ff-407a-4595-8406-e3a0d46450aa\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.398116 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-ssh-key-openstack-edpm-ipam\") pod \"1c9c26ff-407a-4595-8406-e3a0d46450aa\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.407727 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c9c26ff-407a-4595-8406-e3a0d46450aa-kube-api-access-p5g7n" (OuterVolumeSpecName: "kube-api-access-p5g7n") pod "1c9c26ff-407a-4595-8406-e3a0d46450aa" (UID: "1c9c26ff-407a-4595-8406-e3a0d46450aa"). InnerVolumeSpecName "kube-api-access-p5g7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.474854 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1c9c26ff-407a-4595-8406-e3a0d46450aa" (UID: "1c9c26ff-407a-4595-8406-e3a0d46450aa"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.476181 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "1c9c26ff-407a-4595-8406-e3a0d46450aa" (UID: "1c9c26ff-407a-4595-8406-e3a0d46450aa"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.503905 4751 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.503973 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.503991 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5g7n\" (UniqueName: \"kubernetes.io/projected/1c9c26ff-407a-4595-8406-e3a0d46450aa-kube-api-access-p5g7n\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.729467 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" event={"ID":"1c9c26ff-407a-4595-8406-e3a0d46450aa","Type":"ContainerDied","Data":"2c8a69c39dae650df7fb0266ab7ac5aeadc94ff22eeb9c5ce3e0af1e9323b23a"} Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.729512 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c8a69c39dae650df7fb0266ab7ac5aeadc94ff22eeb9c5ce3e0af1e9323b23a" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.729582 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.836454 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg"] Jan 30 21:53:24 crc kubenswrapper[4751]: E0130 21:53:24.837062 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c9c26ff-407a-4595-8406-e3a0d46450aa" containerName="ssh-known-hosts-edpm-deployment" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.837084 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c9c26ff-407a-4595-8406-e3a0d46450aa" containerName="ssh-known-hosts-edpm-deployment" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.837372 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c9c26ff-407a-4595-8406-e3a0d46450aa" containerName="ssh-known-hosts-edpm-deployment" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.838496 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.841140 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.852795 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.853061 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.853953 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.879448 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg"] Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.022789 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zbttg\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.023211 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zbttg\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.023292 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ttsv\" (UniqueName: \"kubernetes.io/projected/10f27009-b34c-43f0-999f-64c2e2316013-kube-api-access-8ttsv\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zbttg\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.131422 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zbttg\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.131522 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ttsv\" (UniqueName: \"kubernetes.io/projected/10f27009-b34c-43f0-999f-64c2e2316013-kube-api-access-8ttsv\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zbttg\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.131646 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-zbttg\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.159107 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zbttg\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.171832 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ttsv\" (UniqueName: \"kubernetes.io/projected/10f27009-b34c-43f0-999f-64c2e2316013-kube-api-access-8ttsv\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zbttg\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.182904 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zbttg\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.219095 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.778220 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg"] Jan 30 21:53:26 crc kubenswrapper[4751]: I0130 21:53:26.760759 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" event={"ID":"10f27009-b34c-43f0-999f-64c2e2316013","Type":"ContainerStarted","Data":"122679292cf1494c9eae6f6aa11d5714e9748f977c1cf888cae1244904e992f0"} Jan 30 21:53:27 crc kubenswrapper[4751]: I0130 21:53:27.311115 4751 scope.go:117] "RemoveContainer" containerID="74af78e3c804e6dbf95f30a1b6c4ba765fc8edb69bfb20dd7f1176259283a952" Jan 30 21:53:27 crc kubenswrapper[4751]: I0130 21:53:27.770566 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" event={"ID":"10f27009-b34c-43f0-999f-64c2e2316013","Type":"ContainerStarted","Data":"0aaf8c7f914e1332e2b97e1f8cbfc4564336f0832a85b0e56b76e043fcaf10b9"} Jan 30 21:53:27 crc kubenswrapper[4751]: I0130 21:53:27.802142 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" podStartSLOduration=3.109678883 podStartE2EDuration="3.802121256s" podCreationTimestamp="2026-01-30 21:53:24 +0000 UTC" firstStartedPulling="2026-01-30 21:53:25.775311783 +0000 UTC m=+2344.521134432" lastFinishedPulling="2026-01-30 21:53:26.467754146 +0000 UTC m=+2345.213576805" observedRunningTime="2026-01-30 21:53:27.799588967 +0000 UTC m=+2346.545411626" watchObservedRunningTime="2026-01-30 21:53:27.802121256 +0000 UTC m=+2346.547943905" Jan 30 21:53:34 crc kubenswrapper[4751]: I0130 21:53:34.842670 4751 generic.go:334] "Generic (PLEG): container finished" podID="10f27009-b34c-43f0-999f-64c2e2316013" 
containerID="0aaf8c7f914e1332e2b97e1f8cbfc4564336f0832a85b0e56b76e043fcaf10b9" exitCode=0 Jan 30 21:53:34 crc kubenswrapper[4751]: I0130 21:53:34.842913 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" event={"ID":"10f27009-b34c-43f0-999f-64c2e2316013","Type":"ContainerDied","Data":"0aaf8c7f914e1332e2b97e1f8cbfc4564336f0832a85b0e56b76e043fcaf10b9"} Jan 30 21:53:34 crc kubenswrapper[4751]: I0130 21:53:34.975656 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:53:34 crc kubenswrapper[4751]: E0130 21:53:34.975952 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.394671 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.518693 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-ssh-key-openstack-edpm-ipam\") pod \"10f27009-b34c-43f0-999f-64c2e2316013\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.518753 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ttsv\" (UniqueName: \"kubernetes.io/projected/10f27009-b34c-43f0-999f-64c2e2316013-kube-api-access-8ttsv\") pod \"10f27009-b34c-43f0-999f-64c2e2316013\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.518909 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-inventory\") pod \"10f27009-b34c-43f0-999f-64c2e2316013\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.524892 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10f27009-b34c-43f0-999f-64c2e2316013-kube-api-access-8ttsv" (OuterVolumeSpecName: "kube-api-access-8ttsv") pod "10f27009-b34c-43f0-999f-64c2e2316013" (UID: "10f27009-b34c-43f0-999f-64c2e2316013"). InnerVolumeSpecName "kube-api-access-8ttsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.554625 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "10f27009-b34c-43f0-999f-64c2e2316013" (UID: "10f27009-b34c-43f0-999f-64c2e2316013"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.560186 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-inventory" (OuterVolumeSpecName: "inventory") pod "10f27009-b34c-43f0-999f-64c2e2316013" (UID: "10f27009-b34c-43f0-999f-64c2e2316013"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.621875 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.621923 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.621937 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ttsv\" (UniqueName: \"kubernetes.io/projected/10f27009-b34c-43f0-999f-64c2e2316013-kube-api-access-8ttsv\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.864361 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" event={"ID":"10f27009-b34c-43f0-999f-64c2e2316013","Type":"ContainerDied","Data":"122679292cf1494c9eae6f6aa11d5714e9748f977c1cf888cae1244904e992f0"} Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.864716 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="122679292cf1494c9eae6f6aa11d5714e9748f977c1cf888cae1244904e992f0" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.864774 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.964171 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp"] Jan 30 21:53:36 crc kubenswrapper[4751]: E0130 21:53:36.965222 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10f27009-b34c-43f0-999f-64c2e2316013" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.965250 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="10f27009-b34c-43f0-999f-64c2e2316013" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.965591 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="10f27009-b34c-43f0-999f-64c2e2316013" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.966663 4751 util.go:30] "No sandbox for pod can be found. 
Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.966663 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.969779 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.970064 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.970253 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.970443 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.980947 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp"] Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.036517 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.036593 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fn2c\" (UniqueName: \"kubernetes.io/projected/0562f716-fdf2-41ff-bb36-5474fa9be5c0-kube-api-access-2fn2c\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.036660 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.139555 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.139751 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.139830 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fn2c\" (UniqueName: \"kubernetes.io/projected/0562f716-fdf2-41ff-bb36-5474fa9be5c0-kube-api-access-2fn2c\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.146455 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.149832 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.160370 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fn2c\" (UniqueName: \"kubernetes.io/projected/0562f716-fdf2-41ff-bb36-5474fa9be5c0-kube-api-access-2fn2c\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.308768 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.912377 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp"] Jan 30 21:53:38 crc kubenswrapper[4751]: I0130 21:53:38.888878 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" event={"ID":"0562f716-fdf2-41ff-bb36-5474fa9be5c0","Type":"ContainerStarted","Data":"bc7cd9d279a5b97e5c4074451d6efae3f0299f971bed67c8ea07ffc3c0342544"} Jan 30 21:53:38 crc kubenswrapper[4751]: I0130 21:53:38.889406 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" event={"ID":"0562f716-fdf2-41ff-bb36-5474fa9be5c0","Type":"ContainerStarted","Data":"3d5a4553e4e6aa626cca117c84e026f409e5c2220a8c3d21496a40d5224c6c8a"} Jan 30 21:53:38 crc kubenswrapper[4751]: I0130 21:53:38.915207 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" podStartSLOduration=2.482883207 podStartE2EDuration="2.915183392s" podCreationTimestamp="2026-01-30 21:53:36 +0000 UTC" firstStartedPulling="2026-01-30 21:53:37.921796017 +0000 UTC m=+2356.667618666" lastFinishedPulling="2026-01-30 21:53:38.354096202 +0000 UTC m=+2357.099918851" observedRunningTime="2026-01-30 21:53:38.906610427 +0000 UTC m=+2357.652433086" watchObservedRunningTime="2026-01-30 21:53:38.915183392 +0000 UTC m=+2357.661006041" Jan 30 21:53:47 crc kubenswrapper[4751]: I0130 21:53:47.421433 4751 generic.go:334] "Generic (PLEG): container finished" podID="0562f716-fdf2-41ff-bb36-5474fa9be5c0" containerID="bc7cd9d279a5b97e5c4074451d6efae3f0299f971bed67c8ea07ffc3c0342544" exitCode=0 Jan 30 21:53:47 crc kubenswrapper[4751]: I0130 21:53:47.421519 4751 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" event={"ID":"0562f716-fdf2-41ff-bb36-5474fa9be5c0","Type":"ContainerDied","Data":"bc7cd9d279a5b97e5c4074451d6efae3f0299f971bed67c8ea07ffc3c0342544"} Jan 30 21:53:47 crc kubenswrapper[4751]: I0130 21:53:47.976297 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:53:47 crc kubenswrapper[4751]: E0130 21:53:47.976700 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:53:48 crc kubenswrapper[4751]: I0130 21:53:48.936975 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.127283 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-ssh-key-openstack-edpm-ipam\") pod \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.127411 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fn2c\" (UniqueName: \"kubernetes.io/projected/0562f716-fdf2-41ff-bb36-5474fa9be5c0-kube-api-access-2fn2c\") pod \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.127451 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-inventory\") pod \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.133687 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0562f716-fdf2-41ff-bb36-5474fa9be5c0-kube-api-access-2fn2c" (OuterVolumeSpecName: "kube-api-access-2fn2c") pod "0562f716-fdf2-41ff-bb36-5474fa9be5c0" (UID: "0562f716-fdf2-41ff-bb36-5474fa9be5c0"). InnerVolumeSpecName "kube-api-access-2fn2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.159262 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0562f716-fdf2-41ff-bb36-5474fa9be5c0" (UID: "0562f716-fdf2-41ff-bb36-5474fa9be5c0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.186310 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-inventory" (OuterVolumeSpecName: "inventory") pod "0562f716-fdf2-41ff-bb36-5474fa9be5c0" (UID: "0562f716-fdf2-41ff-bb36-5474fa9be5c0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.232087 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.232417 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fn2c\" (UniqueName: \"kubernetes.io/projected/0562f716-fdf2-41ff-bb36-5474fa9be5c0-kube-api-access-2fn2c\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.232429 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.445313 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" event={"ID":"0562f716-fdf2-41ff-bb36-5474fa9be5c0","Type":"ContainerDied","Data":"3d5a4553e4e6aa626cca117c84e026f409e5c2220a8c3d21496a40d5224c6c8a"} Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.445382 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d5a4553e4e6aa626cca117c84e026f409e5c2220a8c3d21496a40d5224c6c8a" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.445391 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.553368 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d"] Jan 30 21:53:49 crc kubenswrapper[4751]: E0130 21:53:49.553882 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0562f716-fdf2-41ff-bb36-5474fa9be5c0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.553901 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0562f716-fdf2-41ff-bb36-5474fa9be5c0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.554169 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0562f716-fdf2-41ff-bb36-5474fa9be5c0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.555099 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.560152 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.561237 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.561309 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.561243 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.561633 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.561740 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.561796 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.561741 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.561939 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.574075 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d"] Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.745902 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.745970 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.746021 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.746059 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.746096 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.746207 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.746384 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.746432 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.746536 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcpwb\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-kube-api-access-lcpwb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.746856 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.746915 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.747022 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.747097 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.747123 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.747197 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.747215 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.849670 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.849739 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.849769 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.849797 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.849829 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.849870 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.849891 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.849925 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcpwb\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-kube-api-access-lcpwb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.850013 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.850034 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.850063 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.850088 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.850107 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.850136 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.850156 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.850222 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 
21:53:49.860357 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.865191 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.866521 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.875293 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.876117 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.876734 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.876821 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.876985 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ssh-key-openstack-edpm-ipam\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.877722 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.877835 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.877929 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.879710 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.882879 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcpwb\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-kube-api-access-lcpwb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.883352 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.883875 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 
21:53:49.884168 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:50 crc kubenswrapper[4751]: I0130 21:53:50.176505 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:50 crc kubenswrapper[4751]: I0130 21:53:50.797934 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d"] Jan 30 21:53:51 crc kubenswrapper[4751]: I0130 21:53:51.467226 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" event={"ID":"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f","Type":"ContainerStarted","Data":"7a07c675c0f90c574d8f8f8ea8ef47927ddc4c08de96eca070ff2595e227a85a"} Jan 30 21:53:52 crc kubenswrapper[4751]: I0130 21:53:52.478402 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" event={"ID":"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f","Type":"ContainerStarted","Data":"672587b14b320f08f5e42b1ec7ff59ed42e125c4def3cc11b0df4a7d866c6afc"} Jan 30 21:53:52 crc kubenswrapper[4751]: I0130 21:53:52.506876 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" podStartSLOduration=2.40199318 podStartE2EDuration="3.506857458s" podCreationTimestamp="2026-01-30 21:53:49 +0000 UTC" firstStartedPulling="2026-01-30 21:53:50.814600384 +0000 UTC m=+2369.560423033" lastFinishedPulling="2026-01-30 21:53:51.919464662 +0000 UTC m=+2370.665287311" observedRunningTime="2026-01-30 21:53:52.499017124 +0000 UTC m=+2371.244839793" watchObservedRunningTime="2026-01-30 21:53:52.506857458 +0000 UTC m=+2371.252680107" Jan 30 21:53:59 crc kubenswrapper[4751]: I0130 21:53:59.055812 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-q9ws6"] Jan 30 21:53:59 crc kubenswrapper[4751]: I0130 21:53:59.068095 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-q9ws6"] Jan 30 21:54:00 crc kubenswrapper[4751]: I0130 21:54:00.004896 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee22e47a-e31f-4d01-8eec-e4d24dbb02ca" path="/var/lib/kubelet/pods/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca/volumes" Jan 30 21:54:02 crc kubenswrapper[4751]: I0130 21:54:02.975584 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:54:02 crc kubenswrapper[4751]: E0130 21:54:02.976438 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:54:15 crc kubenswrapper[4751]: I0130 21:54:15.980961 4751 scope.go:117] "RemoveContainer" 
containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:54:15 crc kubenswrapper[4751]: E0130 21:54:15.982198 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:54:27 crc kubenswrapper[4751]: I0130 21:54:27.395503 4751 scope.go:117] "RemoveContainer" containerID="e68cf53ba13bd45baafd16d7ceca811457154cd522453b22e57f6a2054d3b023" Jan 30 21:54:29 crc kubenswrapper[4751]: I0130 21:54:29.976959 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:54:29 crc kubenswrapper[4751]: E0130 21:54:29.978414 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:54:33 crc kubenswrapper[4751]: I0130 21:54:33.948517 4751 generic.go:334] "Generic (PLEG): container finished" podID="d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" containerID="672587b14b320f08f5e42b1ec7ff59ed42e125c4def3cc11b0df4a7d866c6afc" exitCode=0 Jan 30 21:54:33 crc kubenswrapper[4751]: I0130 21:54:33.948608 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" event={"ID":"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f","Type":"ContainerDied","Data":"672587b14b320f08f5e42b1ec7ff59ed42e125c4def3cc11b0df4a7d866c6afc"} Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.491602 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533487 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533565 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-libvirt-combined-ca-bundle\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533610 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533637 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-repo-setup-combined-ca-bundle\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533739 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533776 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ovn-combined-ca-bundle\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533825 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-nova-combined-ca-bundle\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533882 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ssh-key-openstack-edpm-ipam\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533918 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-neutron-metadata-combined-ca-bundle\") pod 
\"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533945 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533981 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-inventory\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.534023 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-bootstrap-combined-ca-bundle\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.534106 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-combined-ca-bundle\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.534142 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.534166 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcpwb\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-kube-api-access-lcpwb\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.534225 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-power-monitoring-combined-ca-bundle\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.543230 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.544696 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.544772 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.544811 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.544715 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.545070 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.546245 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.547915 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). 
InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.548679 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.548707 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.554096 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.554565 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.561148 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.570982 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-kube-api-access-lcpwb" (OuterVolumeSpecName: "kube-api-access-lcpwb") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "kube-api-access-lcpwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.578917 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.580168 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-inventory" (OuterVolumeSpecName: "inventory") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646475 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646516 4751 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646533 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646546 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646565 4751 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646580 4751 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646597 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646610 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcpwb\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-kube-api-access-lcpwb\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646624 4751 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646637 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc 
kubenswrapper[4751]: I0130 21:54:35.646650 4751 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646664 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646677 4751 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646694 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646707 4751 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646719 4751 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.015136 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.018157 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" event={"ID":"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f","Type":"ContainerDied","Data":"7a07c675c0f90c574d8f8f8ea8ef47927ddc4c08de96eca070ff2595e227a85a"} Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.018212 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a07c675c0f90c574d8f8f8ea8ef47927ddc4c08de96eca070ff2595e227a85a" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.162609 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98"] Jan 30 21:54:36 crc kubenswrapper[4751]: E0130 21:54:36.163103 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.163120 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.163358 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.165004 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.171126 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.171390 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.171497 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.171596 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.171706 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.200177 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98"] Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.261418 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.261462 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.261503 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrfn4\" (UniqueName: \"kubernetes.io/projected/43548d7f-01a0-4905-a26d-424ba948cbe8-kube-api-access-vrfn4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.261540 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.261590 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/43548d7f-01a0-4905-a26d-424ba948cbe8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.364489 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/43548d7f-01a0-4905-a26d-424ba948cbe8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.364758 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.364795 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.364846 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrfn4\" (UniqueName: \"kubernetes.io/projected/43548d7f-01a0-4905-a26d-424ba948cbe8-kube-api-access-vrfn4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.364880 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.370313 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/43548d7f-01a0-4905-a26d-424ba948cbe8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.371198 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.371645 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.372259 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.389810 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrfn4\" (UniqueName: \"kubernetes.io/projected/43548d7f-01a0-4905-a26d-424ba948cbe8-kube-api-access-vrfn4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.495059 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:37 crc kubenswrapper[4751]: I0130 21:54:37.897753 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98"] Jan 30 21:54:37 crc kubenswrapper[4751]: W0130 21:54:37.899469 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43548d7f_01a0_4905_a26d_424ba948cbe8.slice/crio-49af737d44eab3ca3b2a29843774a9cbe165e71fec85b195d68cb91b4cf97ca1 WatchSource:0}: Error finding container 49af737d44eab3ca3b2a29843774a9cbe165e71fec85b195d68cb91b4cf97ca1: Status 404 returned error can't find the container with id 49af737d44eab3ca3b2a29843774a9cbe165e71fec85b195d68cb91b4cf97ca1 Jan 30 21:54:38 crc kubenswrapper[4751]: I0130 21:54:38.043443 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" event={"ID":"43548d7f-01a0-4905-a26d-424ba948cbe8","Type":"ContainerStarted","Data":"49af737d44eab3ca3b2a29843774a9cbe165e71fec85b195d68cb91b4cf97ca1"} Jan 30 21:54:39 crc kubenswrapper[4751]: I0130 21:54:39.054271 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" event={"ID":"43548d7f-01a0-4905-a26d-424ba948cbe8","Type":"ContainerStarted","Data":"d63111cd905526ebb8600297ca72fc453aba6ae767a18220cb215df78ce120c4"} Jan 30 21:54:39 crc kubenswrapper[4751]: I0130 21:54:39.076422 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" podStartSLOduration=2.578880099 podStartE2EDuration="3.076403534s" podCreationTimestamp="2026-01-30 21:54:36 +0000 UTC" firstStartedPulling="2026-01-30 21:54:37.901862076 +0000 UTC m=+2416.647684725" lastFinishedPulling="2026-01-30 21:54:38.399385501 +0000 UTC m=+2417.145208160" observedRunningTime="2026-01-30 21:54:39.073754262 +0000 UTC m=+2417.819576911" watchObservedRunningTime="2026-01-30 21:54:39.076403534 +0000 UTC m=+2417.822226183" Jan 30 21:54:43 crc kubenswrapper[4751]: I0130 21:54:43.976396 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:54:43 crc kubenswrapper[4751]: E0130 21:54:43.977122 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:54:54 crc kubenswrapper[4751]: I0130 21:54:54.975929 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:54:54 crc kubenswrapper[4751]: E0130 21:54:54.976739 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:55:05 crc kubenswrapper[4751]: I0130 21:55:05.975670 4751 scope.go:117] 
"RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:55:05 crc kubenswrapper[4751]: E0130 21:55:05.976567 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:55:17 crc kubenswrapper[4751]: I0130 21:55:17.978362 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:55:17 crc kubenswrapper[4751]: E0130 21:55:17.980665 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:55:30 crc kubenswrapper[4751]: I0130 21:55:30.976990 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:55:30 crc kubenswrapper[4751]: E0130 21:55:30.977918 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:55:35 crc kubenswrapper[4751]: I0130 21:55:35.703124 4751 generic.go:334] "Generic (PLEG): container finished" podID="43548d7f-01a0-4905-a26d-424ba948cbe8" containerID="d63111cd905526ebb8600297ca72fc453aba6ae767a18220cb215df78ce120c4" exitCode=0 Jan 30 21:55:35 crc kubenswrapper[4751]: I0130 21:55:35.703214 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" event={"ID":"43548d7f-01a0-4905-a26d-424ba948cbe8","Type":"ContainerDied","Data":"d63111cd905526ebb8600297ca72fc453aba6ae767a18220cb215df78ce120c4"} Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.293583 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.416025 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ssh-key-openstack-edpm-ipam\") pod \"43548d7f-01a0-4905-a26d-424ba948cbe8\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.416312 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ovn-combined-ca-bundle\") pod \"43548d7f-01a0-4905-a26d-424ba948cbe8\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.416398 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/43548d7f-01a0-4905-a26d-424ba948cbe8-ovncontroller-config-0\") pod \"43548d7f-01a0-4905-a26d-424ba948cbe8\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.416427 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-inventory\") pod \"43548d7f-01a0-4905-a26d-424ba948cbe8\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.416722 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrfn4\" (UniqueName: \"kubernetes.io/projected/43548d7f-01a0-4905-a26d-424ba948cbe8-kube-api-access-vrfn4\") pod \"43548d7f-01a0-4905-a26d-424ba948cbe8\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.423984 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "43548d7f-01a0-4905-a26d-424ba948cbe8" (UID: "43548d7f-01a0-4905-a26d-424ba948cbe8"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.424168 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43548d7f-01a0-4905-a26d-424ba948cbe8-kube-api-access-vrfn4" (OuterVolumeSpecName: "kube-api-access-vrfn4") pod "43548d7f-01a0-4905-a26d-424ba948cbe8" (UID: "43548d7f-01a0-4905-a26d-424ba948cbe8"). InnerVolumeSpecName "kube-api-access-vrfn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.453863 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "43548d7f-01a0-4905-a26d-424ba948cbe8" (UID: "43548d7f-01a0-4905-a26d-424ba948cbe8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.483346 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43548d7f-01a0-4905-a26d-424ba948cbe8-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "43548d7f-01a0-4905-a26d-424ba948cbe8" (UID: "43548d7f-01a0-4905-a26d-424ba948cbe8"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.489683 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-inventory" (OuterVolumeSpecName: "inventory") pod "43548d7f-01a0-4905-a26d-424ba948cbe8" (UID: "43548d7f-01a0-4905-a26d-424ba948cbe8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.520180 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.520222 4751 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.520236 4751 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/43548d7f-01a0-4905-a26d-424ba948cbe8-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.520250 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.520263 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrfn4\" (UniqueName: \"kubernetes.io/projected/43548d7f-01a0-4905-a26d-424ba948cbe8-kube-api-access-vrfn4\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.737617 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" event={"ID":"43548d7f-01a0-4905-a26d-424ba948cbe8","Type":"ContainerDied","Data":"49af737d44eab3ca3b2a29843774a9cbe165e71fec85b195d68cb91b4cf97ca1"} Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.737679 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49af737d44eab3ca3b2a29843774a9cbe165e71fec85b195d68cb91b4cf97ca1" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.737697 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.848308 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj"] Jan 30 21:55:37 crc kubenswrapper[4751]: E0130 21:55:37.849043 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43548d7f-01a0-4905-a26d-424ba948cbe8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.849078 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="43548d7f-01a0-4905-a26d-424ba948cbe8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.849458 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="43548d7f-01a0-4905-a26d-424ba948cbe8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.850418 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.857192 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.857205 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.864503 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj"] Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.869592 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.869840 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.870013 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.870141 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.934757 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glmq5\" (UniqueName: \"kubernetes.io/projected/9d2edd75-7066-43c1-9636-149a176ee575-kube-api-access-glmq5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.934802 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.934831 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.934864 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.934894 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.935032 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.037067 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.037273 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.037619 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glmq5\" (UniqueName: \"kubernetes.io/projected/9d2edd75-7066-43c1-9636-149a176ee575-kube-api-access-glmq5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.037683 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.037730 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.037765 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.042924 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.043862 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.044217 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.044278 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.045721 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.059282 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glmq5\" (UniqueName: \"kubernetes.io/projected/9d2edd75-7066-43c1-9636-149a176ee575-kube-api-access-glmq5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.181176 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.743767 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj"] Jan 30 21:55:39 crc kubenswrapper[4751]: I0130 21:55:39.769413 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" event={"ID":"9d2edd75-7066-43c1-9636-149a176ee575","Type":"ContainerStarted","Data":"132f59865db02acd7bac90d85d3082a57a9cd620316d618a496f465ceb78253e"} Jan 30 21:55:39 crc kubenswrapper[4751]: I0130 21:55:39.770155 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" event={"ID":"9d2edd75-7066-43c1-9636-149a176ee575","Type":"ContainerStarted","Data":"e2cb7bf89450e8f13b0ebe7b4c045c23a0242d2fc08f918636cc9c91032b6b5c"} Jan 30 21:55:39 crc kubenswrapper[4751]: I0130 21:55:39.791402 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" podStartSLOduration=2.267633991 podStartE2EDuration="2.79137947s" podCreationTimestamp="2026-01-30 21:55:37 +0000 UTC" firstStartedPulling="2026-01-30 21:55:38.753475465 +0000 UTC m=+2477.499298114" lastFinishedPulling="2026-01-30 21:55:39.277220934 +0000 UTC m=+2478.023043593" observedRunningTime="2026-01-30 21:55:39.784184364 +0000 UTC m=+2478.530007013" watchObservedRunningTime="2026-01-30 21:55:39.79137947 +0000 UTC m=+2478.537202129" Jan 30 21:55:42 crc kubenswrapper[4751]: I0130 21:55:42.975764 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:55:42 crc kubenswrapper[4751]: E0130 21:55:42.976796 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:55:55 crc kubenswrapper[4751]: I0130 21:55:55.976256 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:55:56 crc kubenswrapper[4751]: I0130 21:55:56.981723 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"72b97ad134c710248fa542e7f2b4ac03f85859885b9de5fb88903a4ed9925d19"} Jan 30 21:56:24 crc kubenswrapper[4751]: I0130 21:56:24.278537 4751 generic.go:334] "Generic (PLEG): container finished" podID="9d2edd75-7066-43c1-9636-149a176ee575" 
containerID="132f59865db02acd7bac90d85d3082a57a9cd620316d618a496f465ceb78253e" exitCode=0 Jan 30 21:56:24 crc kubenswrapper[4751]: I0130 21:56:24.278635 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" event={"ID":"9d2edd75-7066-43c1-9636-149a176ee575","Type":"ContainerDied","Data":"132f59865db02acd7bac90d85d3082a57a9cd620316d618a496f465ceb78253e"} Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.826757 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.891179 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glmq5\" (UniqueName: \"kubernetes.io/projected/9d2edd75-7066-43c1-9636-149a176ee575-kube-api-access-glmq5\") pod \"9d2edd75-7066-43c1-9636-149a176ee575\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.891239 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-ssh-key-openstack-edpm-ipam\") pod \"9d2edd75-7066-43c1-9636-149a176ee575\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.891412 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-metadata-combined-ca-bundle\") pod \"9d2edd75-7066-43c1-9636-149a176ee575\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.891445 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-nova-metadata-neutron-config-0\") pod \"9d2edd75-7066-43c1-9636-149a176ee575\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.891496 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-ovn-metadata-agent-neutron-config-0\") pod \"9d2edd75-7066-43c1-9636-149a176ee575\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.891532 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-inventory\") pod \"9d2edd75-7066-43c1-9636-149a176ee575\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.900986 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "9d2edd75-7066-43c1-9636-149a176ee575" (UID: "9d2edd75-7066-43c1-9636-149a176ee575"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.903544 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d2edd75-7066-43c1-9636-149a176ee575-kube-api-access-glmq5" (OuterVolumeSpecName: "kube-api-access-glmq5") pod "9d2edd75-7066-43c1-9636-149a176ee575" (UID: "9d2edd75-7066-43c1-9636-149a176ee575"). InnerVolumeSpecName "kube-api-access-glmq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.929596 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "9d2edd75-7066-43c1-9636-149a176ee575" (UID: "9d2edd75-7066-43c1-9636-149a176ee575"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.933469 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "9d2edd75-7066-43c1-9636-149a176ee575" (UID: "9d2edd75-7066-43c1-9636-149a176ee575"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.936888 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-inventory" (OuterVolumeSpecName: "inventory") pod "9d2edd75-7066-43c1-9636-149a176ee575" (UID: "9d2edd75-7066-43c1-9636-149a176ee575"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.955934 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9d2edd75-7066-43c1-9636-149a176ee575" (UID: "9d2edd75-7066-43c1-9636-149a176ee575"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.994481 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.994513 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glmq5\" (UniqueName: \"kubernetes.io/projected/9d2edd75-7066-43c1-9636-149a176ee575-kube-api-access-glmq5\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.994526 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.994538 4751 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.994551 4751 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.994567 4751 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.301083 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" event={"ID":"9d2edd75-7066-43c1-9636-149a176ee575","Type":"ContainerDied","Data":"e2cb7bf89450e8f13b0ebe7b4c045c23a0242d2fc08f918636cc9c91032b6b5c"} Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.301505 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2cb7bf89450e8f13b0ebe7b4c045c23a0242d2fc08f918636cc9c91032b6b5c" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.301133 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.399613 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f"] Jan 30 21:56:26 crc kubenswrapper[4751]: E0130 21:56:26.400166 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2edd75-7066-43c1-9636-149a176ee575" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.400192 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2edd75-7066-43c1-9636-149a176ee575" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.400483 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2edd75-7066-43c1-9636-149a176ee575" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.401549 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.405307 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.405444 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.405942 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.406407 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.407877 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.418043 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f"] Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.507478 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqtd7\" (UniqueName: \"kubernetes.io/projected/64c0e484-536b-4bf5-9f35-2bfc04b14133-kube-api-access-hqtd7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.507762 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.507944 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: 
\"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.508207 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.508296 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.610784 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqtd7\" (UniqueName: \"kubernetes.io/projected/64c0e484-536b-4bf5-9f35-2bfc04b14133-kube-api-access-hqtd7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.610856 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.610900 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.611021 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.611070 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.619135 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: 
\"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.619873 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.621129 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.621897 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.641506 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqtd7\" (UniqueName: \"kubernetes.io/projected/64c0e484-536b-4bf5-9f35-2bfc04b14133-kube-api-access-hqtd7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.719772 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:27 crc kubenswrapper[4751]: I0130 21:56:27.293042 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f"] Jan 30 21:56:27 crc kubenswrapper[4751]: I0130 21:56:27.316718 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" event={"ID":"64c0e484-536b-4bf5-9f35-2bfc04b14133","Type":"ContainerStarted","Data":"9a026eb729a34d4b477952f2632ed15c11d6b22037f08e83aabf1428e03de6e2"} Jan 30 21:56:28 crc kubenswrapper[4751]: I0130 21:56:28.327244 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" event={"ID":"64c0e484-536b-4bf5-9f35-2bfc04b14133","Type":"ContainerStarted","Data":"e55a222a8cf3285625e24882cb0407f200684e42ff8e342e2e19480733bf455c"} Jan 30 21:56:28 crc kubenswrapper[4751]: I0130 21:56:28.356893 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" podStartSLOduration=1.933966357 podStartE2EDuration="2.35687434s" podCreationTimestamp="2026-01-30 21:56:26 +0000 UTC" firstStartedPulling="2026-01-30 21:56:27.305069974 +0000 UTC m=+2526.050892623" lastFinishedPulling="2026-01-30 21:56:27.727977947 +0000 UTC m=+2526.473800606" observedRunningTime="2026-01-30 21:56:28.348221134 +0000 UTC m=+2527.094043783" watchObservedRunningTime="2026-01-30 21:56:28.35687434 +0000 UTC m=+2527.102696989" Jan 30 21:58:24 crc kubenswrapper[4751]: I0130 21:58:24.126903 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:58:24 crc kubenswrapper[4751]: I0130 21:58:24.127526 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:58:54 crc kubenswrapper[4751]: I0130 21:58:54.126548 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:58:54 crc kubenswrapper[4751]: I0130 21:58:54.127050 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:59:24 crc kubenswrapper[4751]: I0130 21:59:24.126520 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:59:24 crc kubenswrapper[4751]: I0130 21:59:24.127128 4751 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:59:24 crc kubenswrapper[4751]: I0130 21:59:24.127175 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:59:24 crc kubenswrapper[4751]: I0130 21:59:24.128259 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"72b97ad134c710248fa542e7f2b4ac03f85859885b9de5fb88903a4ed9925d19"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:59:24 crc kubenswrapper[4751]: I0130 21:59:24.128333 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://72b97ad134c710248fa542e7f2b4ac03f85859885b9de5fb88903a4ed9925d19" gracePeriod=600 Jan 30 21:59:24 crc kubenswrapper[4751]: I0130 21:59:24.695714 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="72b97ad134c710248fa542e7f2b4ac03f85859885b9de5fb88903a4ed9925d19" exitCode=0 Jan 30 21:59:24 crc kubenswrapper[4751]: I0130 21:59:24.695774 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"72b97ad134c710248fa542e7f2b4ac03f85859885b9de5fb88903a4ed9925d19"} Jan 30 21:59:24 crc kubenswrapper[4751]: I0130 21:59:24.696403 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0"} Jan 30 21:59:24 crc kubenswrapper[4751]: I0130 21:59:24.696432 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.145072 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6"] Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.148672 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.153840 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.154041 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.170121 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6"] Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.300174 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87fd180d-a717-4e4f-92fc-e8e77f2d303c-secret-volume\") pod \"collect-profiles-29496840-cv6z6\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.300419 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fd180d-a717-4e4f-92fc-e8e77f2d303c-config-volume\") pod \"collect-profiles-29496840-cv6z6\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.300794 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpqc2\" (UniqueName: \"kubernetes.io/projected/87fd180d-a717-4e4f-92fc-e8e77f2d303c-kube-api-access-mpqc2\") pod \"collect-profiles-29496840-cv6z6\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.403752 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87fd180d-a717-4e4f-92fc-e8e77f2d303c-secret-volume\") pod \"collect-profiles-29496840-cv6z6\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.404215 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fd180d-a717-4e4f-92fc-e8e77f2d303c-config-volume\") pod \"collect-profiles-29496840-cv6z6\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.404537 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpqc2\" (UniqueName: \"kubernetes.io/projected/87fd180d-a717-4e4f-92fc-e8e77f2d303c-kube-api-access-mpqc2\") pod \"collect-profiles-29496840-cv6z6\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.405139 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fd180d-a717-4e4f-92fc-e8e77f2d303c-config-volume\") pod 
\"collect-profiles-29496840-cv6z6\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.411671 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87fd180d-a717-4e4f-92fc-e8e77f2d303c-secret-volume\") pod \"collect-profiles-29496840-cv6z6\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.420456 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpqc2\" (UniqueName: \"kubernetes.io/projected/87fd180d-a717-4e4f-92fc-e8e77f2d303c-kube-api-access-mpqc2\") pod \"collect-profiles-29496840-cv6z6\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.516193 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.989376 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6"] Jan 30 22:00:01 crc kubenswrapper[4751]: I0130 22:00:01.103719 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" event={"ID":"87fd180d-a717-4e4f-92fc-e8e77f2d303c","Type":"ContainerStarted","Data":"cd852f618c14d77a9840e0cfcbd5ee8f8500a03a169a9c644de32c3791b5d569"} Jan 30 22:00:02 crc kubenswrapper[4751]: I0130 22:00:02.116348 4751 generic.go:334] "Generic (PLEG): container finished" podID="87fd180d-a717-4e4f-92fc-e8e77f2d303c" containerID="8cb214ecc973d14bc0906a66a17ca4c95d3c39c0cada1250d2a736afa76d1aeb" exitCode=0 Jan 30 22:00:02 crc kubenswrapper[4751]: I0130 22:00:02.116426 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" event={"ID":"87fd180d-a717-4e4f-92fc-e8e77f2d303c","Type":"ContainerDied","Data":"8cb214ecc973d14bc0906a66a17ca4c95d3c39c0cada1250d2a736afa76d1aeb"} Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.582555 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mbgjr"] Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.585534 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.595952 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z7nt\" (UniqueName: \"kubernetes.io/projected/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-kube-api-access-8z7nt\") pod \"certified-operators-mbgjr\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") " pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.596109 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-utilities\") pod \"certified-operators-mbgjr\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") " pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.596185 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-catalog-content\") pod \"certified-operators-mbgjr\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") " pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.609488 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mbgjr"] Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.637016 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.700158 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-catalog-content\") pod \"certified-operators-mbgjr\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") " pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.700533 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z7nt\" (UniqueName: \"kubernetes.io/projected/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-kube-api-access-8z7nt\") pod \"certified-operators-mbgjr\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") " pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.700810 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-utilities\") pod \"certified-operators-mbgjr\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") " pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.701781 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-catalog-content\") pod \"certified-operators-mbgjr\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") " pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.701890 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-utilities\") pod 
\"certified-operators-mbgjr\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") " pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.728923 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z7nt\" (UniqueName: \"kubernetes.io/projected/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-kube-api-access-8z7nt\") pod \"certified-operators-mbgjr\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") " pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.803421 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fd180d-a717-4e4f-92fc-e8e77f2d303c-config-volume\") pod \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.803666 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87fd180d-a717-4e4f-92fc-e8e77f2d303c-secret-volume\") pod \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.803687 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpqc2\" (UniqueName: \"kubernetes.io/projected/87fd180d-a717-4e4f-92fc-e8e77f2d303c-kube-api-access-mpqc2\") pod \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.805186 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87fd180d-a717-4e4f-92fc-e8e77f2d303c-config-volume" (OuterVolumeSpecName: "config-volume") pod "87fd180d-a717-4e4f-92fc-e8e77f2d303c" (UID: "87fd180d-a717-4e4f-92fc-e8e77f2d303c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.808432 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87fd180d-a717-4e4f-92fc-e8e77f2d303c-kube-api-access-mpqc2" (OuterVolumeSpecName: "kube-api-access-mpqc2") pod "87fd180d-a717-4e4f-92fc-e8e77f2d303c" (UID: "87fd180d-a717-4e4f-92fc-e8e77f2d303c"). InnerVolumeSpecName "kube-api-access-mpqc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.812700 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87fd180d-a717-4e4f-92fc-e8e77f2d303c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "87fd180d-a717-4e4f-92fc-e8e77f2d303c" (UID: "87fd180d-a717-4e4f-92fc-e8e77f2d303c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.907451 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fd180d-a717-4e4f-92fc-e8e77f2d303c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.907493 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87fd180d-a717-4e4f-92fc-e8e77f2d303c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.907509 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpqc2\" (UniqueName: \"kubernetes.io/projected/87fd180d-a717-4e4f-92fc-e8e77f2d303c-kube-api-access-mpqc2\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.948388 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:04 crc kubenswrapper[4751]: I0130 22:00:04.160446 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" event={"ID":"87fd180d-a717-4e4f-92fc-e8e77f2d303c","Type":"ContainerDied","Data":"cd852f618c14d77a9840e0cfcbd5ee8f8500a03a169a9c644de32c3791b5d569"} Jan 30 22:00:04 crc kubenswrapper[4751]: I0130 22:00:04.160492 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd852f618c14d77a9840e0cfcbd5ee8f8500a03a169a9c644de32c3791b5d569" Jan 30 22:00:04 crc kubenswrapper[4751]: I0130 22:00:04.160551 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:04 crc kubenswrapper[4751]: I0130 22:00:04.526774 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mbgjr"] Jan 30 22:00:04 crc kubenswrapper[4751]: I0130 22:00:04.718587 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p"] Jan 30 22:00:04 crc kubenswrapper[4751]: I0130 22:00:04.730584 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p"] Jan 30 22:00:05 crc kubenswrapper[4751]: I0130 22:00:05.173585 4751 generic.go:334] "Generic (PLEG): container finished" podID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerID="4dcaa711832a9bcefff451d85870a1e1c9f1f1df5c264b8880f8f7854b2f6a5e" exitCode=0 Jan 30 22:00:05 crc kubenswrapper[4751]: I0130 22:00:05.173645 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbgjr" event={"ID":"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017","Type":"ContainerDied","Data":"4dcaa711832a9bcefff451d85870a1e1c9f1f1df5c264b8880f8f7854b2f6a5e"} Jan 30 22:00:05 crc kubenswrapper[4751]: I0130 22:00:05.173911 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbgjr" event={"ID":"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017","Type":"ContainerStarted","Data":"2295f2e02e7ab4a280ce1245b46978ccac929c130b2e9ec0a55b1dbdfa326de1"} Jan 30 22:00:05 crc kubenswrapper[4751]: I0130 22:00:05.176446 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:00:05 crc kubenswrapper[4751]: I0130 22:00:05.992698 4751 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="cc9ed63a-23a2-4b50-a290-0409ff14fd95" path="/var/lib/kubelet/pods/cc9ed63a-23a2-4b50-a290-0409ff14fd95/volumes" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.012644 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h4rng"] Jan 30 22:00:08 crc kubenswrapper[4751]: E0130 22:00:08.013493 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87fd180d-a717-4e4f-92fc-e8e77f2d303c" containerName="collect-profiles" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.013507 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="87fd180d-a717-4e4f-92fc-e8e77f2d303c" containerName="collect-profiles" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.013799 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="87fd180d-a717-4e4f-92fc-e8e77f2d303c" containerName="collect-profiles" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.015744 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.031339 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h4rng"] Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.136073 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-utilities\") pod \"redhat-operators-h4rng\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.136765 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-catalog-content\") pod \"redhat-operators-h4rng\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.137195 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kfwq\" (UniqueName: \"kubernetes.io/projected/6f296323-78aa-4bcb-8418-898e0d7b775e-kube-api-access-2kfwq\") pod \"redhat-operators-h4rng\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.240167 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-catalog-content\") pod \"redhat-operators-h4rng\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.240314 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kfwq\" (UniqueName: \"kubernetes.io/projected/6f296323-78aa-4bcb-8418-898e0d7b775e-kube-api-access-2kfwq\") pod \"redhat-operators-h4rng\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.240495 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-utilities\") pod \"redhat-operators-h4rng\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.241107 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-catalog-content\") pod \"redhat-operators-h4rng\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.241119 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-utilities\") pod \"redhat-operators-h4rng\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.267446 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kfwq\" (UniqueName: \"kubernetes.io/projected/6f296323-78aa-4bcb-8418-898e0d7b775e-kube-api-access-2kfwq\") pod \"redhat-operators-h4rng\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.349344 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.889556 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h4rng"] Jan 30 22:00:09 crc kubenswrapper[4751]: I0130 22:00:09.217422 4751 generic.go:334] "Generic (PLEG): container finished" podID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerID="3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454" exitCode=0 Jan 30 22:00:09 crc kubenswrapper[4751]: I0130 22:00:09.217528 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4rng" event={"ID":"6f296323-78aa-4bcb-8418-898e0d7b775e","Type":"ContainerDied","Data":"3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454"} Jan 30 22:00:09 crc kubenswrapper[4751]: I0130 22:00:09.217965 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4rng" event={"ID":"6f296323-78aa-4bcb-8418-898e0d7b775e","Type":"ContainerStarted","Data":"ea88b11c8501c2661a48b1f2780fa528f995da90eb3c23827d58fc3d93966644"} Jan 30 22:00:09 crc kubenswrapper[4751]: I0130 22:00:09.221587 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbgjr" event={"ID":"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017","Type":"ContainerStarted","Data":"145648500fb4fe24047f9789895bde02ee47ed1fcc6d67993ff7dc9ab1a1638c"} Jan 30 22:00:11 crc kubenswrapper[4751]: I0130 22:00:11.244622 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4rng" event={"ID":"6f296323-78aa-4bcb-8418-898e0d7b775e","Type":"ContainerStarted","Data":"bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923"} Jan 30 22:00:13 crc kubenswrapper[4751]: I0130 22:00:13.269834 4751 generic.go:334] "Generic (PLEG): container finished" podID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerID="145648500fb4fe24047f9789895bde02ee47ed1fcc6d67993ff7dc9ab1a1638c" exitCode=0 Jan 30 22:00:13 crc kubenswrapper[4751]: 
I0130 22:00:13.269899 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbgjr" event={"ID":"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017","Type":"ContainerDied","Data":"145648500fb4fe24047f9789895bde02ee47ed1fcc6d67993ff7dc9ab1a1638c"} Jan 30 22:00:15 crc kubenswrapper[4751]: I0130 22:00:15.307538 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbgjr" event={"ID":"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017","Type":"ContainerStarted","Data":"ff119fbbce42b05dc2414135ad63839e511e523a3e25e47a1bb53a0ee43eb1ea"} Jan 30 22:00:15 crc kubenswrapper[4751]: I0130 22:00:15.330808 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mbgjr" podStartSLOduration=3.139745247 podStartE2EDuration="12.330788062s" podCreationTimestamp="2026-01-30 22:00:03 +0000 UTC" firstStartedPulling="2026-01-30 22:00:05.176217045 +0000 UTC m=+2743.922039694" lastFinishedPulling="2026-01-30 22:00:14.36725986 +0000 UTC m=+2753.113082509" observedRunningTime="2026-01-30 22:00:15.324732426 +0000 UTC m=+2754.070555085" watchObservedRunningTime="2026-01-30 22:00:15.330788062 +0000 UTC m=+2754.076610711" Jan 30 22:00:23 crc kubenswrapper[4751]: I0130 22:00:23.385859 4751 generic.go:334] "Generic (PLEG): container finished" podID="64c0e484-536b-4bf5-9f35-2bfc04b14133" containerID="e55a222a8cf3285625e24882cb0407f200684e42ff8e342e2e19480733bf455c" exitCode=0 Jan 30 22:00:23 crc kubenswrapper[4751]: I0130 22:00:23.385935 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" event={"ID":"64c0e484-536b-4bf5-9f35-2bfc04b14133","Type":"ContainerDied","Data":"e55a222a8cf3285625e24882cb0407f200684e42ff8e342e2e19480733bf455c"} Jan 30 22:00:23 crc kubenswrapper[4751]: I0130 22:00:23.949527 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:23 crc kubenswrapper[4751]: I0130 22:00:23.949781 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:24 crc kubenswrapper[4751]: I0130 22:00:24.901219 4751 util.go:48] "No ready sandbox for pod can be found. 
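The "Observed pod startup duration" entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the time spent pulling the image (lastFinishedPulling minus firstStartedPulling). A quick check with the values copied from the entry; only the subtraction rule is assumed:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-30 22:00:03 +0000 UTC")
	firstPull := parse("2026-01-30 22:00:05.176217045 +0000 UTC")
	lastPull := parse("2026-01-30 22:00:14.36725986 +0000 UTC")
	running := parse("2026-01-30 22:00:15.330788062 +0000 UTC")

	e2e := running.Sub(created)     // 12.330788062s == podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 9.191042815s spent pulling the image
	fmt.Println(e2e, e2e-pull)      // prints 12.330788062s 3.139745247s (== podStartSLOduration)
}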
Jan 30 22:00:24 crc kubenswrapper[4751]: I0130 22:00:24.901219 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.001316 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-secret-0\") pod \"64c0e484-536b-4bf5-9f35-2bfc04b14133\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") "
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.001489 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqtd7\" (UniqueName: \"kubernetes.io/projected/64c0e484-536b-4bf5-9f35-2bfc04b14133-kube-api-access-hqtd7\") pod \"64c0e484-536b-4bf5-9f35-2bfc04b14133\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") "
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.001661 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-ssh-key-openstack-edpm-ipam\") pod \"64c0e484-536b-4bf5-9f35-2bfc04b14133\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") "
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.001705 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-inventory\") pod \"64c0e484-536b-4bf5-9f35-2bfc04b14133\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") "
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.001769 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-combined-ca-bundle\") pod \"64c0e484-536b-4bf5-9f35-2bfc04b14133\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") "
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.006963 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mbgjr" podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerName="registry-server" probeResult="failure" output=<
Jan 30 22:00:25 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s
Jan 30 22:00:25 crc kubenswrapper[4751]: >
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.009858 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "64c0e484-536b-4bf5-9f35-2bfc04b14133" (UID: "64c0e484-536b-4bf5-9f35-2bfc04b14133"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.015782 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64c0e484-536b-4bf5-9f35-2bfc04b14133-kube-api-access-hqtd7" (OuterVolumeSpecName: "kube-api-access-hqtd7") pod "64c0e484-536b-4bf5-9f35-2bfc04b14133" (UID: "64c0e484-536b-4bf5-9f35-2bfc04b14133"). InnerVolumeSpecName "kube-api-access-hqtd7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.036993 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "64c0e484-536b-4bf5-9f35-2bfc04b14133" (UID: "64c0e484-536b-4bf5-9f35-2bfc04b14133"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.041285 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "64c0e484-536b-4bf5-9f35-2bfc04b14133" (UID: "64c0e484-536b-4bf5-9f35-2bfc04b14133"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.043795 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-inventory" (OuterVolumeSpecName: "inventory") pod "64c0e484-536b-4bf5-9f35-2bfc04b14133" (UID: "64c0e484-536b-4bf5-9f35-2bfc04b14133"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.105260 4751 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.105297 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqtd7\" (UniqueName: \"kubernetes.io/projected/64c0e484-536b-4bf5-9f35-2bfc04b14133-kube-api-access-hqtd7\") on node \"crc\" DevicePath \"\""
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.105367 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.105382 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.105393 4751 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.410011 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" event={"ID":"64c0e484-536b-4bf5-9f35-2bfc04b14133","Type":"ContainerDied","Data":"9a026eb729a34d4b477952f2632ed15c11d6b22037f08e83aabf1428e03de6e2"}
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.410432 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a026eb729a34d4b477952f2632ed15c11d6b22037f08e83aabf1428e03de6e2"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.410061 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.545318 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"]
Jan 30 22:00:25 crc kubenswrapper[4751]: E0130 22:00:25.549997 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c0e484-536b-4bf5-9f35-2bfc04b14133" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.550021 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c0e484-536b-4bf5-9f35-2bfc04b14133" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.550277 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="64c0e484-536b-4bf5-9f35-2bfc04b14133" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.561144 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.567815 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.567923 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.568026 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.568052 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.568112 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.568202 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.573651 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"]
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.575752 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.722823 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.722906 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.722969 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.723029 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.723048 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7165caae-e471-463b-9f66-be7fb4c7c463-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.723107 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.723124 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbpmt\" (UniqueName: \"kubernetes.io/projected/7165caae-e471-463b-9f66-be7fb4c7c463-kube-api-access-cbpmt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.723149 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.723191 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.825574 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.825799 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbpmt\" (UniqueName: \"kubernetes.io/projected/7165caae-e471-463b-9f66-be7fb4c7c463-kube-api-access-cbpmt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.825835 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.825890 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.825934 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.825999 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.826071 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.826149 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.826172 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7165caae-e471-463b-9f66-be7fb4c7c463-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.827036 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7165caae-e471-463b-9f66-be7fb4c7c463-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.831549 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.831809 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.833855 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.835945 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.836000 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.836243 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.837136 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.850566 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbpmt\" (UniqueName: \"kubernetes.io/projected/7165caae-e471-463b-9f66-be7fb4c7c463-kube-api-access-cbpmt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.886829 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"
Jan 30 22:00:26 crc kubenswrapper[4751]: W0130 22:00:26.482541 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7165caae_e471_463b_9f66_be7fb4c7c463.slice/crio-c6452be326750e2048619c44dd9a07b67cfa00b1469a8de0a0fe91a76b4bd302 WatchSource:0}: Error finding container c6452be326750e2048619c44dd9a07b67cfa00b1469a8de0a0fe91a76b4bd302: Status 404 returned error can't find the container with id c6452be326750e2048619c44dd9a07b67cfa00b1469a8de0a0fe91a76b4bd302
Jan 30 22:00:26 crc kubenswrapper[4751]: I0130 22:00:26.483364 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"]
Jan 30 22:00:27 crc kubenswrapper[4751]: I0130 22:00:27.434112 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" event={"ID":"7165caae-e471-463b-9f66-be7fb4c7c463","Type":"ContainerStarted","Data":"c6452be326750e2048619c44dd9a07b67cfa00b1469a8de0a0fe91a76b4bd302"}
Jan 30 22:00:27 crc kubenswrapper[4751]: I0130 22:00:27.437817 4751 generic.go:334] "Generic (PLEG): container finished" podID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerID="bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923" exitCode=0
Jan 30 22:00:27 crc kubenswrapper[4751]: I0130 22:00:27.437883 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4rng" event={"ID":"6f296323-78aa-4bcb-8418-898e0d7b775e","Type":"ContainerDied","Data":"bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923"}
Jan 30 22:00:27 crc kubenswrapper[4751]: I0130 22:00:27.568065 4751 scope.go:117] "RemoveContainer" containerID="e542d53fa8d38b44c5415e62c079644bf8fb944ad32fd45452254fbadf2caa51"
Jan 30 22:00:28 crc kubenswrapper[4751]: I0130 22:00:28.449502 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" event={"ID":"7165caae-e471-463b-9f66-be7fb4c7c463","Type":"ContainerStarted","Data":"4286082d44313eb3c72a73df8ca40afdcf8a7a69e10559ba7896eeef94d61e37"}
Jan 30 22:00:28 crc kubenswrapper[4751]: I0130 22:00:28.452455 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4rng" event={"ID":"6f296323-78aa-4bcb-8418-898e0d7b775e","Type":"ContainerStarted","Data":"a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343"}
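The reflector.go:368 "Caches populated" lines show the kubelet warming a dedicated, single-object watch for each Secret and ConfigMap the incoming pod references, rather than caching whole namespaces. The same scoping can be expressed with a field selector; a client-go sketch (kubeconfig handling is illustrative, one of the secret names is taken from the log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Watch exactly one Secret, mirroring how the kubelet scopes its reflector
	// to object-"openstack"/"nova-migration-ssh-key" above.
	w, err := cs.CoreV1().Secrets("openstack").Watch(context.Background(), metav1.ListOptions{
		FieldSelector: "metadata.name=nova-migration-ssh-key",
	})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type) // ADDED once the cache is populated, MODIFIED on rotation
	}
}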
Jan 30 22:00:28 crc kubenswrapper[4751]: I0130 22:00:28.463938 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" podStartSLOduration=2.147722824 podStartE2EDuration="3.463917619s" podCreationTimestamp="2026-01-30 22:00:25 +0000 UTC" firstStartedPulling="2026-01-30 22:00:26.484626175 +0000 UTC m=+2765.230448824" lastFinishedPulling="2026-01-30 22:00:27.80082097 +0000 UTC m=+2766.546643619" observedRunningTime="2026-01-30 22:00:28.463167878 +0000 UTC m=+2767.208990547" watchObservedRunningTime="2026-01-30 22:00:28.463917619 +0000 UTC m=+2767.209740258"
Jan 30 22:00:28 crc kubenswrapper[4751]: I0130 22:00:28.484590 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h4rng" podStartSLOduration=2.7103253 podStartE2EDuration="21.484570964s" podCreationTimestamp="2026-01-30 22:00:07 +0000 UTC" firstStartedPulling="2026-01-30 22:00:09.219365227 +0000 UTC m=+2747.965187876" lastFinishedPulling="2026-01-30 22:00:27.993610891 +0000 UTC m=+2766.739433540" observedRunningTime="2026-01-30 22:00:28.479115914 +0000 UTC m=+2767.224938563" watchObservedRunningTime="2026-01-30 22:00:28.484570964 +0000 UTC m=+2767.230393613"
Jan 30 22:00:35 crc kubenswrapper[4751]: I0130 22:00:35.009997 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mbgjr" podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerName="registry-server" probeResult="failure" output=<
Jan 30 22:00:35 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s
Jan 30 22:00:35 crc kubenswrapper[4751]: >
Jan 30 22:00:38 crc kubenswrapper[4751]: I0130 22:00:38.350950 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h4rng"
Jan 30 22:00:38 crc kubenswrapper[4751]: I0130 22:00:38.351579 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h4rng"
Jan 30 22:00:39 crc kubenswrapper[4751]: I0130 22:00:39.397208 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h4rng" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="registry-server" probeResult="failure" output=<
Jan 30 22:00:39 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s
Jan 30 22:00:39 crc kubenswrapper[4751]: >
Jan 30 22:00:43 crc kubenswrapper[4751]: I0130 22:00:43.996219 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mbgjr"
Jan 30 22:00:44 crc kubenswrapper[4751]: I0130 22:00:44.051369 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mbgjr"
Jan 30 22:00:44 crc kubenswrapper[4751]: I0130 22:00:44.237185 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mbgjr"]
Jan 30 22:00:45 crc kubenswrapper[4751]: I0130 22:00:45.763387 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mbgjr" podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerName="registry-server" containerID="cri-o://ff119fbbce42b05dc2414135ad63839e511e523a3e25e47a1bb53a0ee43eb1ea" gracePeriod=2
Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 22:00:46.780000 4751 generic.go:334] "Generic (PLEG): container finished" podID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerID="ff119fbbce42b05dc2414135ad63839e511e523a3e25e47a1bb53a0ee43eb1ea" exitCode=0
Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 22:00:46.780067 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbgjr" event={"ID":"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017","Type":"ContainerDied","Data":"ff119fbbce42b05dc2414135ad63839e511e523a3e25e47a1bb53a0ee43eb1ea"}
Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 22:00:46.780522 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbgjr" event={"ID":"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017","Type":"ContainerDied","Data":"2295f2e02e7ab4a280ce1245b46978ccac929c130b2e9ec0a55b1dbdfa326de1"}
Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 22:00:46.780540 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2295f2e02e7ab4a280ce1245b46978ccac929c130b2e9ec0a55b1dbdfa326de1"
Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 22:00:46.879458 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mbgjr"
Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 22:00:46.941758 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-utilities\") pod \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") "
Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 22:00:46.941850 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z7nt\" (UniqueName: \"kubernetes.io/projected/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-kube-api-access-8z7nt\") pod \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") "
Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 22:00:46.941890 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-catalog-content\") pod \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") "
Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 22:00:46.942868 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-utilities" (OuterVolumeSpecName: "utilities") pod "ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" (UID: "ddeaf5f6-2b13-47e2-a99e-9f23d7a84017"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 22:00:46.943061 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 22:00:46.947810 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-kube-api-access-8z7nt" (OuterVolumeSpecName: "kube-api-access-8z7nt") pod "ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" (UID: "ddeaf5f6-2b13-47e2-a99e-9f23d7a84017"). InnerVolumeSpecName "kube-api-access-8z7nt". PluginName "kubernetes.io/projected", VolumeGidValue ""
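The teardown above follows the normal path: an API DELETE arrives, the kubelet kills the registry-server container with a 2-second grace period, PLEG reports ContainerDied for the container and then for the sandbox, and the volumes are unmounted and detached. Issuing the same deletion through client-go looks roughly like this (pod name and namespace from the log; kubeconfig handling is illustrative, and whether gracePeriod=2 came from the pod spec or the delete request isn't visible in the log):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// gracePeriod=2 in the kill entry above; the kubelet forwards this value
	// to the container runtime as the stop timeout.
	grace := int64(2)
	if err := cs.CoreV1().Pods("openshift-marketplace").Delete(context.Background(),
		"certified-operators-mbgjr", metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		panic(err)
	}
}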
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:00:47 crc kubenswrapper[4751]: I0130 22:00:47.045486 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z7nt\" (UniqueName: \"kubernetes.io/projected/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-kube-api-access-8z7nt\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:47 crc kubenswrapper[4751]: I0130 22:00:47.045524 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:47 crc kubenswrapper[4751]: I0130 22:00:47.796097 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:47 crc kubenswrapper[4751]: I0130 22:00:47.835903 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mbgjr"] Jan 30 22:00:47 crc kubenswrapper[4751]: I0130 22:00:47.846227 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mbgjr"] Jan 30 22:00:47 crc kubenswrapper[4751]: I0130 22:00:47.989571 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" path="/var/lib/kubelet/pods/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017/volumes" Jan 30 22:00:49 crc kubenswrapper[4751]: I0130 22:00:49.396185 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h4rng" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="registry-server" probeResult="failure" output=< Jan 30 22:00:49 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:00:49 crc kubenswrapper[4751]: > Jan 30 22:00:59 crc kubenswrapper[4751]: I0130 22:00:59.398262 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h4rng" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="registry-server" probeResult="failure" output=< Jan 30 22:00:59 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:00:59 crc kubenswrapper[4751]: > Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.173454 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29496841-qnsrj"] Jan 30 22:01:00 crc kubenswrapper[4751]: E0130 22:01:00.174035 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerName="extract-content" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.174049 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerName="extract-content" Jan 30 22:01:00 crc kubenswrapper[4751]: E0130 22:01:00.174067 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerName="extract-utilities" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.174073 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerName="extract-utilities" Jan 30 22:01:00 crc kubenswrapper[4751]: E0130 22:01:00.174089 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerName="registry-server" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.174094 4751 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerName="registry-server" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.174388 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerName="registry-server" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.175214 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.211839 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496841-qnsrj"] Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.274829 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-config-data\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.274930 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-combined-ca-bundle\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.275114 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-fernet-keys\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.275151 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmsl9\" (UniqueName: \"kubernetes.io/projected/ec292c3e-470e-4f61-92e9-4e2c8098f879-kube-api-access-nmsl9\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.376721 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-combined-ca-bundle\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.376915 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-fernet-keys\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.376951 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmsl9\" (UniqueName: \"kubernetes.io/projected/ec292c3e-470e-4f61-92e9-4e2c8098f879-kube-api-access-nmsl9\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.377043 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-config-data\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.393411 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-config-data\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.393411 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-fernet-keys\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.393468 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmsl9\" (UniqueName: \"kubernetes.io/projected/ec292c3e-470e-4f61-92e9-4e2c8098f879-kube-api-access-nmsl9\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.393411 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-combined-ca-bundle\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.510103 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:01 crc kubenswrapper[4751]: I0130 22:01:01.090365 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496841-qnsrj"] Jan 30 22:01:01 crc kubenswrapper[4751]: W0130 22:01:01.098110 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec292c3e_470e_4f61_92e9_4e2c8098f879.slice/crio-a41e99fabcd417d94a957674e0615513fa1a98fd6536bf33a451c8f9a839fdb0 WatchSource:0}: Error finding container a41e99fabcd417d94a957674e0615513fa1a98fd6536bf33a451c8f9a839fdb0: Status 404 returned error can't find the container with id a41e99fabcd417d94a957674e0615513fa1a98fd6536bf33a451c8f9a839fdb0 Jan 30 22:01:01 crc kubenswrapper[4751]: I0130 22:01:01.958059 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496841-qnsrj" event={"ID":"ec292c3e-470e-4f61-92e9-4e2c8098f879","Type":"ContainerStarted","Data":"b88daa692ab905bb5eba548f715cb26848d06e66af1868ac78813ab3b7cb8c31"} Jan 30 22:01:01 crc kubenswrapper[4751]: I0130 22:01:01.958471 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496841-qnsrj" event={"ID":"ec292c3e-470e-4f61-92e9-4e2c8098f879","Type":"ContainerStarted","Data":"a41e99fabcd417d94a957674e0615513fa1a98fd6536bf33a451c8f9a839fdb0"} Jan 30 22:01:01 crc kubenswrapper[4751]: I0130 22:01:01.976724 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29496841-qnsrj" podStartSLOduration=1.976706209 podStartE2EDuration="1.976706209s" podCreationTimestamp="2026-01-30 22:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:01.973994156 +0000 UTC m=+2800.719816805" watchObservedRunningTime="2026-01-30 22:01:01.976706209 +0000 UTC m=+2800.722528858" Jan 30 22:01:05 crc kubenswrapper[4751]: I0130 22:01:05.054025 4751 generic.go:334] "Generic (PLEG): container finished" podID="ec292c3e-470e-4f61-92e9-4e2c8098f879" containerID="b88daa692ab905bb5eba548f715cb26848d06e66af1868ac78813ab3b7cb8c31" exitCode=0 Jan 30 22:01:05 crc kubenswrapper[4751]: I0130 22:01:05.054119 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496841-qnsrj" event={"ID":"ec292c3e-470e-4f61-92e9-4e2c8098f879","Type":"ContainerDied","Data":"b88daa692ab905bb5eba548f715cb26848d06e66af1868ac78813ab3b7cb8c31"} Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.478391 4751 util.go:48] "No ready sandbox for pod can be found. 
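The numeric suffixes on these CronJob-spawned pods encode their schedule: the CronJob controller names each Job after its scheduled time in minutes since the Unix epoch. So keystone-cron-29496841 decodes to 2026-01-30 22:01:00 UTC, exactly when the pod appears above, and collect-profiles-29496840 to 22:00:00 UTC. A one-liner check:

package main

import (
	"fmt"
	"time"
)

func main() {
	// CronJob job names carry scheduled-time-in-minutes-since-epoch as their suffix.
	for _, m := range []int64{29496840, 29496841} {
		fmt.Println(m, time.Unix(m*60, 0).UTC())
	}
	// Output:
	// 29496840 2026-01-30 22:00:00 +0000 UTC
	// 29496841 2026-01-30 22:01:00 +0000 UTC
}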
Need to start a new one" pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.647616 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-config-data\") pod \"ec292c3e-470e-4f61-92e9-4e2c8098f879\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.647796 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-combined-ca-bundle\") pod \"ec292c3e-470e-4f61-92e9-4e2c8098f879\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.648008 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmsl9\" (UniqueName: \"kubernetes.io/projected/ec292c3e-470e-4f61-92e9-4e2c8098f879-kube-api-access-nmsl9\") pod \"ec292c3e-470e-4f61-92e9-4e2c8098f879\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.648140 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-fernet-keys\") pod \"ec292c3e-470e-4f61-92e9-4e2c8098f879\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.654650 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec292c3e-470e-4f61-92e9-4e2c8098f879-kube-api-access-nmsl9" (OuterVolumeSpecName: "kube-api-access-nmsl9") pod "ec292c3e-470e-4f61-92e9-4e2c8098f879" (UID: "ec292c3e-470e-4f61-92e9-4e2c8098f879"). InnerVolumeSpecName "kube-api-access-nmsl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.654793 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ec292c3e-470e-4f61-92e9-4e2c8098f879" (UID: "ec292c3e-470e-4f61-92e9-4e2c8098f879"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.688268 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec292c3e-470e-4f61-92e9-4e2c8098f879" (UID: "ec292c3e-470e-4f61-92e9-4e2c8098f879"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.727221 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-config-data" (OuterVolumeSpecName: "config-data") pod "ec292c3e-470e-4f61-92e9-4e2c8098f879" (UID: "ec292c3e-470e-4f61-92e9-4e2c8098f879"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.751609 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.751651 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.751668 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmsl9\" (UniqueName: \"kubernetes.io/projected/ec292c3e-470e-4f61-92e9-4e2c8098f879-kube-api-access-nmsl9\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.751682 4751 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:07 crc kubenswrapper[4751]: I0130 22:01:07.076760 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496841-qnsrj" event={"ID":"ec292c3e-470e-4f61-92e9-4e2c8098f879","Type":"ContainerDied","Data":"a41e99fabcd417d94a957674e0615513fa1a98fd6536bf33a451c8f9a839fdb0"} Jan 30 22:01:07 crc kubenswrapper[4751]: I0130 22:01:07.076990 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a41e99fabcd417d94a957674e0615513fa1a98fd6536bf33a451c8f9a839fdb0" Jan 30 22:01:07 crc kubenswrapper[4751]: I0130 22:01:07.076821 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:08 crc kubenswrapper[4751]: I0130 22:01:08.410172 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:01:08 crc kubenswrapper[4751]: I0130 22:01:08.468125 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:01:09 crc kubenswrapper[4751]: I0130 22:01:09.162848 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h4rng"] Jan 30 22:01:10 crc kubenswrapper[4751]: I0130 22:01:10.101405 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h4rng" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="registry-server" containerID="cri-o://a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343" gracePeriod=2 Jan 30 22:01:10 crc kubenswrapper[4751]: I0130 22:01:10.796072 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:01:10 crc kubenswrapper[4751]: I0130 22:01:10.964697 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-utilities\") pod \"6f296323-78aa-4bcb-8418-898e0d7b775e\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " Jan 30 22:01:10 crc kubenswrapper[4751]: I0130 22:01:10.965016 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-catalog-content\") pod \"6f296323-78aa-4bcb-8418-898e0d7b775e\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " Jan 30 22:01:10 crc kubenswrapper[4751]: I0130 22:01:10.965097 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kfwq\" (UniqueName: \"kubernetes.io/projected/6f296323-78aa-4bcb-8418-898e0d7b775e-kube-api-access-2kfwq\") pod \"6f296323-78aa-4bcb-8418-898e0d7b775e\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " Jan 30 22:01:10 crc kubenswrapper[4751]: I0130 22:01:10.965540 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-utilities" (OuterVolumeSpecName: "utilities") pod "6f296323-78aa-4bcb-8418-898e0d7b775e" (UID: "6f296323-78aa-4bcb-8418-898e0d7b775e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:01:10 crc kubenswrapper[4751]: I0130 22:01:10.966045 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:10 crc kubenswrapper[4751]: I0130 22:01:10.971618 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f296323-78aa-4bcb-8418-898e0d7b775e-kube-api-access-2kfwq" (OuterVolumeSpecName: "kube-api-access-2kfwq") pod "6f296323-78aa-4bcb-8418-898e0d7b775e" (UID: "6f296323-78aa-4bcb-8418-898e0d7b775e"). InnerVolumeSpecName "kube-api-access-2kfwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.074685 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kfwq\" (UniqueName: \"kubernetes.io/projected/6f296323-78aa-4bcb-8418-898e0d7b775e-kube-api-access-2kfwq\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.092857 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f296323-78aa-4bcb-8418-898e0d7b775e" (UID: "6f296323-78aa-4bcb-8418-898e0d7b775e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.112754 4751 generic.go:334] "Generic (PLEG): container finished" podID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerID="a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343" exitCode=0 Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.112797 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4rng" event={"ID":"6f296323-78aa-4bcb-8418-898e0d7b775e","Type":"ContainerDied","Data":"a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343"} Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.112823 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4rng" event={"ID":"6f296323-78aa-4bcb-8418-898e0d7b775e","Type":"ContainerDied","Data":"ea88b11c8501c2661a48b1f2780fa528f995da90eb3c23827d58fc3d93966644"} Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.112839 4751 scope.go:117] "RemoveContainer" containerID="a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.112960 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.144828 4751 scope.go:117] "RemoveContainer" containerID="bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.152518 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h4rng"] Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.162976 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h4rng"] Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.177839 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.180698 4751 scope.go:117] "RemoveContainer" containerID="3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.226233 4751 scope.go:117] "RemoveContainer" containerID="a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343" Jan 30 22:01:11 crc kubenswrapper[4751]: E0130 22:01:11.226714 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343\": container with ID starting with a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343 not found: ID does not exist" containerID="a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.226748 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343"} err="failed to get container status \"a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343\": rpc error: code = NotFound desc = could not find container \"a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343\": container with ID starting with a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343 not found: ID does not exist" Jan 30 22:01:11 crc 
kubenswrapper[4751]: I0130 22:01:11.226770 4751 scope.go:117] "RemoveContainer" containerID="bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923" Jan 30 22:01:11 crc kubenswrapper[4751]: E0130 22:01:11.227061 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923\": container with ID starting with bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923 not found: ID does not exist" containerID="bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.227186 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923"} err="failed to get container status \"bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923\": rpc error: code = NotFound desc = could not find container \"bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923\": container with ID starting with bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923 not found: ID does not exist" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.227296 4751 scope.go:117] "RemoveContainer" containerID="3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454" Jan 30 22:01:11 crc kubenswrapper[4751]: E0130 22:01:11.227664 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454\": container with ID starting with 3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454 not found: ID does not exist" containerID="3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.227696 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454"} err="failed to get container status \"3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454\": rpc error: code = NotFound desc = could not find container \"3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454\": container with ID starting with 3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454 not found: ID does not exist" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.996127 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" path="/var/lib/kubelet/pods/6f296323-78aa-4bcb-8418-898e0d7b775e/volumes" Jan 30 22:01:24 crc kubenswrapper[4751]: I0130 22:01:24.126592 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:01:24 crc kubenswrapper[4751]: I0130 22:01:24.127182 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:01:54 crc kubenswrapper[4751]: I0130 22:01:54.126791 4751 patch_prober.go:28] interesting 
Jan 30 22:01:54 crc kubenswrapper[4751]: I0130 22:01:54.127440 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.337412 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8c8mn"]
Jan 30 22:02:06 crc kubenswrapper[4751]: E0130 22:02:06.339053 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="extract-content"
Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.339078 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="extract-content"
Jan 30 22:02:06 crc kubenswrapper[4751]: E0130 22:02:06.339108 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="registry-server"
Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.339117 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="registry-server"
Jan 30 22:02:06 crc kubenswrapper[4751]: E0130 22:02:06.339149 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="extract-utilities"
Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.339158 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="extract-utilities"
Jan 30 22:02:06 crc kubenswrapper[4751]: E0130 22:02:06.339176 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec292c3e-470e-4f61-92e9-4e2c8098f879" containerName="keystone-cron"
Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.339184 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec292c3e-470e-4f61-92e9-4e2c8098f879" containerName="keystone-cron"
Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.339457 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec292c3e-470e-4f61-92e9-4e2c8098f879" containerName="keystone-cron"
Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.339495 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="registry-server"
Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.341686 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8c8mn"
Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.350099 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8c8mn"]
Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.489514 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-utilities\") pod \"community-operators-8c8mn\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " pod="openshift-marketplace/community-operators-8c8mn"
Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.489986 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5bdn\" (UniqueName: \"kubernetes.io/projected/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-kube-api-access-b5bdn\") pod \"community-operators-8c8mn\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " pod="openshift-marketplace/community-operators-8c8mn"
Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.490202 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-catalog-content\") pod \"community-operators-8c8mn\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " pod="openshift-marketplace/community-operators-8c8mn"
Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.592503 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-utilities\") pod \"community-operators-8c8mn\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " pod="openshift-marketplace/community-operators-8c8mn"
Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.592594 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5bdn\" (UniqueName: \"kubernetes.io/projected/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-kube-api-access-b5bdn\") pod \"community-operators-8c8mn\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " pod="openshift-marketplace/community-operators-8c8mn"
Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.592656 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-catalog-content\") pod \"community-operators-8c8mn\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " pod="openshift-marketplace/community-operators-8c8mn"
Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.593000 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-utilities\") pod \"community-operators-8c8mn\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " pod="openshift-marketplace/community-operators-8c8mn"
Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.593036 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-catalog-content\") pod \"community-operators-8c8mn\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " pod="openshift-marketplace/community-operators-8c8mn"
Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.617550 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5bdn\" (UniqueName: \"kubernetes.io/projected/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-kube-api-access-b5bdn\") pod \"community-operators-8c8mn\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " pod="openshift-marketplace/community-operators-8c8mn"
Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.666632 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8c8mn"
Jan 30 22:02:07 crc kubenswrapper[4751]: I0130 22:02:07.261687 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8c8mn"]
Jan 30 22:02:07 crc kubenswrapper[4751]: I0130 22:02:07.985476 4751 generic.go:334] "Generic (PLEG): container finished" podID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerID="ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13" exitCode=0
Jan 30 22:02:07 crc kubenswrapper[4751]: I0130 22:02:07.988459 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8c8mn" event={"ID":"c9fecd57-6bc8-4fc8-b188-e885cefc9f84","Type":"ContainerDied","Data":"ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13"}
Jan 30 22:02:07 crc kubenswrapper[4751]: I0130 22:02:07.988500 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8c8mn" event={"ID":"c9fecd57-6bc8-4fc8-b188-e885cefc9f84","Type":"ContainerStarted","Data":"1e64c64ad003c23a57c893ae9b9bbce3beefa15f8e5c6d4a0e289ffc571f4bf7"}
Jan 30 22:02:09 crc kubenswrapper[4751]: I0130 22:02:09.015044 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8c8mn" event={"ID":"c9fecd57-6bc8-4fc8-b188-e885cefc9f84","Type":"ContainerStarted","Data":"4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc"}
Jan 30 22:02:12 crc kubenswrapper[4751]: I0130 22:02:12.043825 4751 generic.go:334] "Generic (PLEG): container finished" podID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerID="4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc" exitCode=0
Jan 30 22:02:12 crc kubenswrapper[4751]: I0130 22:02:12.043891 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8c8mn" event={"ID":"c9fecd57-6bc8-4fc8-b188-e885cefc9f84","Type":"ContainerDied","Data":"4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc"}
Jan 30 22:02:13 crc kubenswrapper[4751]: I0130 22:02:13.059050 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8c8mn" event={"ID":"c9fecd57-6bc8-4fc8-b188-e885cefc9f84","Type":"ContainerStarted","Data":"d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216"}
Jan 30 22:02:13 crc kubenswrapper[4751]: I0130 22:02:13.097671 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8c8mn" podStartSLOduration=2.639238981 podStartE2EDuration="7.097648883s" podCreationTimestamp="2026-01-30 22:02:06 +0000 UTC" firstStartedPulling="2026-01-30 22:02:07.987560734 +0000 UTC m=+2866.733383383" lastFinishedPulling="2026-01-30 22:02:12.445970636 +0000 UTC m=+2871.191793285" observedRunningTime="2026-01-30 22:02:13.091648259 +0000 UTC m=+2871.837470908" watchObservedRunningTime="2026-01-30 22:02:13.097648883 +0000 UTC m=+2871.843471532"
Jan 30 22:02:16 crc kubenswrapper[4751]: I0130 22:02:16.667944 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8c8mn"
status="unhealthy" pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:16 crc kubenswrapper[4751]: I0130 22:02:16.668712 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:16 crc kubenswrapper[4751]: I0130 22:02:16.739723 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:17 crc kubenswrapper[4751]: I0130 22:02:17.205914 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:17 crc kubenswrapper[4751]: I0130 22:02:17.927496 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8c8mn"] Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.121376 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8c8mn" podUID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerName="registry-server" containerID="cri-o://d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216" gracePeriod=2 Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.681299 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.806669 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5bdn\" (UniqueName: \"kubernetes.io/projected/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-kube-api-access-b5bdn\") pod \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.806746 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-catalog-content\") pod \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.806950 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-utilities\") pod \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.807805 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-utilities" (OuterVolumeSpecName: "utilities") pod "c9fecd57-6bc8-4fc8-b188-e885cefc9f84" (UID: "c9fecd57-6bc8-4fc8-b188-e885cefc9f84"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.811828 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-kube-api-access-b5bdn" (OuterVolumeSpecName: "kube-api-access-b5bdn") pod "c9fecd57-6bc8-4fc8-b188-e885cefc9f84" (UID: "c9fecd57-6bc8-4fc8-b188-e885cefc9f84"). InnerVolumeSpecName "kube-api-access-b5bdn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.862791 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9fecd57-6bc8-4fc8-b188-e885cefc9f84" (UID: "c9fecd57-6bc8-4fc8-b188-e885cefc9f84"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.913570 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.913607 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5bdn\" (UniqueName: \"kubernetes.io/projected/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-kube-api-access-b5bdn\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.913618 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.162664 4751 generic.go:334] "Generic (PLEG): container finished" podID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerID="d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216" exitCode=0 Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.162724 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8c8mn" event={"ID":"c9fecd57-6bc8-4fc8-b188-e885cefc9f84","Type":"ContainerDied","Data":"d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216"} Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.162816 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8c8mn" event={"ID":"c9fecd57-6bc8-4fc8-b188-e885cefc9f84","Type":"ContainerDied","Data":"1e64c64ad003c23a57c893ae9b9bbce3beefa15f8e5c6d4a0e289ffc571f4bf7"} Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.162759 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.162842 4751 scope.go:117] "RemoveContainer" containerID="d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216" Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.205605 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8c8mn"] Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.208527 4751 scope.go:117] "RemoveContainer" containerID="4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc" Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.219049 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8c8mn"] Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.251545 4751 scope.go:117] "RemoveContainer" containerID="ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13" Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.319518 4751 scope.go:117] "RemoveContainer" containerID="d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216" Jan 30 22:02:20 crc kubenswrapper[4751]: E0130 22:02:20.325034 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216\": container with ID starting with d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216 not found: ID does not exist" containerID="d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216" Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.325074 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216"} err="failed to get container status \"d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216\": rpc error: code = NotFound desc = could not find container \"d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216\": container with ID starting with d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216 not found: ID does not exist" Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.325100 4751 scope.go:117] "RemoveContainer" containerID="4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc" Jan 30 22:02:20 crc kubenswrapper[4751]: E0130 22:02:20.325503 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc\": container with ID starting with 4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc not found: ID does not exist" containerID="4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc" Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.325527 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc"} err="failed to get container status \"4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc\": rpc error: code = NotFound desc = could not find container \"4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc\": container with ID starting with 4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc not found: ID does not exist" Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.325541 4751 scope.go:117] "RemoveContainer" 
containerID="ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13" Jan 30 22:02:20 crc kubenswrapper[4751]: E0130 22:02:20.325766 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13\": container with ID starting with ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13 not found: ID does not exist" containerID="ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13" Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.325784 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13"} err="failed to get container status \"ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13\": rpc error: code = NotFound desc = could not find container \"ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13\": container with ID starting with ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13 not found: ID does not exist" Jan 30 22:02:21 crc kubenswrapper[4751]: I0130 22:02:21.996412 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" path="/var/lib/kubelet/pods/c9fecd57-6bc8-4fc8-b188-e885cefc9f84/volumes" Jan 30 22:02:24 crc kubenswrapper[4751]: I0130 22:02:24.127063 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:02:24 crc kubenswrapper[4751]: I0130 22:02:24.127672 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:02:24 crc kubenswrapper[4751]: I0130 22:02:24.127722 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 22:02:24 crc kubenswrapper[4751]: I0130 22:02:24.128828 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:02:24 crc kubenswrapper[4751]: I0130 22:02:24.128895 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" gracePeriod=600 Jan 30 22:02:24 crc kubenswrapper[4751]: E0130 22:02:24.266618 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:02:25 crc kubenswrapper[4751]: I0130 22:02:25.237952 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" exitCode=0 Jan 30 22:02:25 crc kubenswrapper[4751]: I0130 22:02:25.238043 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0"} Jan 30 22:02:25 crc kubenswrapper[4751]: I0130 22:02:25.238267 4751 scope.go:117] "RemoveContainer" containerID="72b97ad134c710248fa542e7f2b4ac03f85859885b9de5fb88903a4ed9925d19" Jan 30 22:02:25 crc kubenswrapper[4751]: I0130 22:02:25.239018 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:02:25 crc kubenswrapper[4751]: E0130 22:02:25.239291 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:02:29 crc kubenswrapper[4751]: I0130 22:02:29.294413 4751 generic.go:334] "Generic (PLEG): container finished" podID="7165caae-e471-463b-9f66-be7fb4c7c463" containerID="4286082d44313eb3c72a73df8ca40afdcf8a7a69e10559ba7896eeef94d61e37" exitCode=0 Jan 30 22:02:29 crc kubenswrapper[4751]: I0130 22:02:29.294515 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" event={"ID":"7165caae-e471-463b-9f66-be7fb4c7c463","Type":"ContainerDied","Data":"4286082d44313eb3c72a73df8ca40afdcf8a7a69e10559ba7896eeef94d61e37"} Jan 30 22:02:30 crc kubenswrapper[4751]: I0130 22:02:30.870822 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:02:30 crc kubenswrapper[4751]: I0130 22:02:30.993316 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbpmt\" (UniqueName: \"kubernetes.io/projected/7165caae-e471-463b-9f66-be7fb4c7c463-kube-api-access-cbpmt\") pod \"7165caae-e471-463b-9f66-be7fb4c7c463\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " Jan 30 22:02:30 crc kubenswrapper[4751]: I0130 22:02:30.993420 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-1\") pod \"7165caae-e471-463b-9f66-be7fb4c7c463\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " Jan 30 22:02:30 crc kubenswrapper[4751]: I0130 22:02:30.993452 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7165caae-e471-463b-9f66-be7fb4c7c463-nova-extra-config-0\") pod \"7165caae-e471-463b-9f66-be7fb4c7c463\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " Jan 30 22:02:30 crc kubenswrapper[4751]: I0130 22:02:30.993480 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-0\") pod \"7165caae-e471-463b-9f66-be7fb4c7c463\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " Jan 30 22:02:30 crc kubenswrapper[4751]: I0130 22:02:30.993660 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-inventory\") pod \"7165caae-e471-463b-9f66-be7fb4c7c463\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " Jan 30 22:02:30 crc kubenswrapper[4751]: I0130 22:02:30.993727 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-combined-ca-bundle\") pod \"7165caae-e471-463b-9f66-be7fb4c7c463\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " Jan 30 22:02:30 crc kubenswrapper[4751]: I0130 22:02:30.993796 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-1\") pod \"7165caae-e471-463b-9f66-be7fb4c7c463\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " Jan 30 22:02:30 crc kubenswrapper[4751]: I0130 22:02:30.993815 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-0\") pod \"7165caae-e471-463b-9f66-be7fb4c7c463\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " Jan 30 22:02:30 crc kubenswrapper[4751]: I0130 22:02:30.993844 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-ssh-key-openstack-edpm-ipam\") pod \"7165caae-e471-463b-9f66-be7fb4c7c463\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:30.999952 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "7165caae-e471-463b-9f66-be7fb4c7c463" (UID: "7165caae-e471-463b-9f66-be7fb4c7c463"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:30.999982 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7165caae-e471-463b-9f66-be7fb4c7c463-kube-api-access-cbpmt" (OuterVolumeSpecName: "kube-api-access-cbpmt") pod "7165caae-e471-463b-9f66-be7fb4c7c463" (UID: "7165caae-e471-463b-9f66-be7fb4c7c463"). InnerVolumeSpecName "kube-api-access-cbpmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.026825 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "7165caae-e471-463b-9f66-be7fb4c7c463" (UID: "7165caae-e471-463b-9f66-be7fb4c7c463"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.039199 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7165caae-e471-463b-9f66-be7fb4c7c463-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "7165caae-e471-463b-9f66-be7fb4c7c463" (UID: "7165caae-e471-463b-9f66-be7fb4c7c463"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.047656 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7165caae-e471-463b-9f66-be7fb4c7c463" (UID: "7165caae-e471-463b-9f66-be7fb4c7c463"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.047696 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "7165caae-e471-463b-9f66-be7fb4c7c463" (UID: "7165caae-e471-463b-9f66-be7fb4c7c463"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.047755 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-inventory" (OuterVolumeSpecName: "inventory") pod "7165caae-e471-463b-9f66-be7fb4c7c463" (UID: "7165caae-e471-463b-9f66-be7fb4c7c463"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.047770 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "7165caae-e471-463b-9f66-be7fb4c7c463" (UID: "7165caae-e471-463b-9f66-be7fb4c7c463"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.049473 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "7165caae-e471-463b-9f66-be7fb4c7c463" (UID: "7165caae-e471-463b-9f66-be7fb4c7c463"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.098238 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.098268 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbpmt\" (UniqueName: \"kubernetes.io/projected/7165caae-e471-463b-9f66-be7fb4c7c463-kube-api-access-cbpmt\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.098279 4751 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.098290 4751 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7165caae-e471-463b-9f66-be7fb4c7c463-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.098300 4751 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.098313 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.098321 4751 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.098349 4751 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.098359 4751 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.315480 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" event={"ID":"7165caae-e471-463b-9f66-be7fb4c7c463","Type":"ContainerDied","Data":"c6452be326750e2048619c44dd9a07b67cfa00b1469a8de0a0fe91a76b4bd302"} Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.315731 4751 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c6452be326750e2048619c44dd9a07b67cfa00b1469a8de0a0fe91a76b4bd302" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.315530 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.428264 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx"] Jan 30 22:02:31 crc kubenswrapper[4751]: E0130 22:02:31.428725 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7165caae-e471-463b-9f66-be7fb4c7c463" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.428744 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7165caae-e471-463b-9f66-be7fb4c7c463" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 22:02:31 crc kubenswrapper[4751]: E0130 22:02:31.428759 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerName="extract-content" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.428767 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerName="extract-content" Jan 30 22:02:31 crc kubenswrapper[4751]: E0130 22:02:31.428790 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerName="extract-utilities" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.428796 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerName="extract-utilities" Jan 30 22:02:31 crc kubenswrapper[4751]: E0130 22:02:31.428815 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerName="registry-server" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.428821 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerName="registry-server" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.429034 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7165caae-e471-463b-9f66-be7fb4c7c463" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.429054 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerName="registry-server" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.430320 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.441912 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.441991 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.441922 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.442239 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.442419 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.452788 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx"] Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.508272 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.508718 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.508854 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.508986 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.509139 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7qg6\" (UniqueName: \"kubernetes.io/projected/93c2956e-910c-4604-a9ba-86289f854a59-kube-api-access-m7qg6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc 
kubenswrapper[4751]: I0130 22:02:31.509294 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.509541 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.611388 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.611695 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.611898 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.611983 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.612083 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.612200 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7qg6\" (UniqueName: \"kubernetes.io/projected/93c2956e-910c-4604-a9ba-86289f854a59-kube-api-access-m7qg6\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.612293 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.616238 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.616376 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.616771 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.617035 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.617807 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.619948 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.639010 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7qg6\" (UniqueName: 
\"kubernetes.io/projected/93c2956e-910c-4604-a9ba-86289f854a59-kube-api-access-m7qg6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.757507 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:32 crc kubenswrapper[4751]: I0130 22:02:32.445155 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx"] Jan 30 22:02:33 crc kubenswrapper[4751]: I0130 22:02:33.339538 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" event={"ID":"93c2956e-910c-4604-a9ba-86289f854a59","Type":"ContainerStarted","Data":"bfa35e5976d0f1134073dbb445bffda8319a9310e44f3019d580e15386f4c974"} Jan 30 22:02:34 crc kubenswrapper[4751]: I0130 22:02:34.351843 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" event={"ID":"93c2956e-910c-4604-a9ba-86289f854a59","Type":"ContainerStarted","Data":"e561a4f54ddbb4f307aca022b6eee2605073ff32bfb5b070e34d1d12cbb217a9"} Jan 30 22:02:34 crc kubenswrapper[4751]: I0130 22:02:34.380745 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" podStartSLOduration=2.56553063 podStartE2EDuration="3.380726408s" podCreationTimestamp="2026-01-30 22:02:31 +0000 UTC" firstStartedPulling="2026-01-30 22:02:32.441273162 +0000 UTC m=+2891.187095821" lastFinishedPulling="2026-01-30 22:02:33.25646895 +0000 UTC m=+2892.002291599" observedRunningTime="2026-01-30 22:02:34.376265096 +0000 UTC m=+2893.122087745" watchObservedRunningTime="2026-01-30 22:02:34.380726408 +0000 UTC m=+2893.126549047" Jan 30 22:02:40 crc kubenswrapper[4751]: I0130 22:02:40.976607 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:02:40 crc kubenswrapper[4751]: E0130 22:02:40.977675 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:02:55 crc kubenswrapper[4751]: I0130 22:02:55.976685 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:02:55 crc kubenswrapper[4751]: E0130 22:02:55.977646 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:03:10 crc kubenswrapper[4751]: I0130 22:03:10.976103 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:03:10 crc kubenswrapper[4751]: E0130 
22:03:10.976860 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:03:24 crc kubenswrapper[4751]: I0130 22:03:24.975837 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:03:24 crc kubenswrapper[4751]: E0130 22:03:24.976660 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.704475 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6htrn"] Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.709090 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.721006 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6htrn"] Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.840210 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-utilities\") pod \"redhat-marketplace-6htrn\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.840653 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv6cj\" (UniqueName: \"kubernetes.io/projected/309930f6-c8a8-487c-b74e-d2010aedd851-kube-api-access-sv6cj\") pod \"redhat-marketplace-6htrn\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.841024 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-catalog-content\") pod \"redhat-marketplace-6htrn\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.943510 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv6cj\" (UniqueName: \"kubernetes.io/projected/309930f6-c8a8-487c-b74e-d2010aedd851-kube-api-access-sv6cj\") pod \"redhat-marketplace-6htrn\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.943764 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-catalog-content\") pod \"redhat-marketplace-6htrn\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.943834 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-utilities\") pod \"redhat-marketplace-6htrn\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.944379 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-catalog-content\") pod \"redhat-marketplace-6htrn\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.944553 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-utilities\") pod \"redhat-marketplace-6htrn\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.967390 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv6cj\" (UniqueName: \"kubernetes.io/projected/309930f6-c8a8-487c-b74e-d2010aedd851-kube-api-access-sv6cj\") pod \"redhat-marketplace-6htrn\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:35 crc kubenswrapper[4751]: I0130 22:03:35.038358 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:35 crc kubenswrapper[4751]: I0130 22:03:35.612751 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6htrn"] Jan 30 22:03:35 crc kubenswrapper[4751]: I0130 22:03:35.996511 4751 generic.go:334] "Generic (PLEG): container finished" podID="309930f6-c8a8-487c-b74e-d2010aedd851" containerID="3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d" exitCode=0 Jan 30 22:03:35 crc kubenswrapper[4751]: I0130 22:03:35.996572 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6htrn" event={"ID":"309930f6-c8a8-487c-b74e-d2010aedd851","Type":"ContainerDied","Data":"3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d"} Jan 30 22:03:35 crc kubenswrapper[4751]: I0130 22:03:35.996605 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6htrn" event={"ID":"309930f6-c8a8-487c-b74e-d2010aedd851","Type":"ContainerStarted","Data":"4a229657d088af7f49d719c2a6ef1d4ee422502663ee61b387d5d7457dd49ff6"} Jan 30 22:03:37 crc kubenswrapper[4751]: I0130 22:03:37.976744 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:03:37 crc kubenswrapper[4751]: E0130 22:03:37.977736 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:03:38 crc kubenswrapper[4751]: I0130 22:03:38.018601 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6htrn" event={"ID":"309930f6-c8a8-487c-b74e-d2010aedd851","Type":"ContainerStarted","Data":"d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53"} Jan 30 22:03:39 crc kubenswrapper[4751]: I0130 22:03:39.032146 4751 generic.go:334] "Generic (PLEG): container finished" podID="309930f6-c8a8-487c-b74e-d2010aedd851" containerID="d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53" exitCode=0 Jan 30 22:03:39 crc kubenswrapper[4751]: I0130 22:03:39.032535 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6htrn" event={"ID":"309930f6-c8a8-487c-b74e-d2010aedd851","Type":"ContainerDied","Data":"d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53"} Jan 30 22:03:40 crc kubenswrapper[4751]: I0130 22:03:40.045614 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6htrn" event={"ID":"309930f6-c8a8-487c-b74e-d2010aedd851","Type":"ContainerStarted","Data":"21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317"} Jan 30 22:03:40 crc kubenswrapper[4751]: I0130 22:03:40.077833 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6htrn" podStartSLOduration=2.344143574 podStartE2EDuration="6.077811602s" podCreationTimestamp="2026-01-30 22:03:34 +0000 UTC" firstStartedPulling="2026-01-30 22:03:35.998476244 +0000 UTC m=+2954.744298903" lastFinishedPulling="2026-01-30 22:03:39.732144272 +0000 UTC m=+2958.477966931" observedRunningTime="2026-01-30 
22:03:40.075317184 +0000 UTC m=+2958.821139833" watchObservedRunningTime="2026-01-30 22:03:40.077811602 +0000 UTC m=+2958.823634251" Jan 30 22:03:45 crc kubenswrapper[4751]: I0130 22:03:45.039380 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:45 crc kubenswrapper[4751]: I0130 22:03:45.039980 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:45 crc kubenswrapper[4751]: I0130 22:03:45.103759 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:45 crc kubenswrapper[4751]: I0130 22:03:45.168773 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:45 crc kubenswrapper[4751]: I0130 22:03:45.344688 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6htrn"] Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.129307 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6htrn" podUID="309930f6-c8a8-487c-b74e-d2010aedd851" containerName="registry-server" containerID="cri-o://21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317" gracePeriod=2 Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.656293 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.714683 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-utilities\") pod \"309930f6-c8a8-487c-b74e-d2010aedd851\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.714885 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-catalog-content\") pod \"309930f6-c8a8-487c-b74e-d2010aedd851\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.714952 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv6cj\" (UniqueName: \"kubernetes.io/projected/309930f6-c8a8-487c-b74e-d2010aedd851-kube-api-access-sv6cj\") pod \"309930f6-c8a8-487c-b74e-d2010aedd851\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.715582 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-utilities" (OuterVolumeSpecName: "utilities") pod "309930f6-c8a8-487c-b74e-d2010aedd851" (UID: "309930f6-c8a8-487c-b74e-d2010aedd851"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.720075 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/309930f6-c8a8-487c-b74e-d2010aedd851-kube-api-access-sv6cj" (OuterVolumeSpecName: "kube-api-access-sv6cj") pod "309930f6-c8a8-487c-b74e-d2010aedd851" (UID: "309930f6-c8a8-487c-b74e-d2010aedd851"). InnerVolumeSpecName "kube-api-access-sv6cj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.739892 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "309930f6-c8a8-487c-b74e-d2010aedd851" (UID: "309930f6-c8a8-487c-b74e-d2010aedd851"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.818026 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.818061 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv6cj\" (UniqueName: \"kubernetes.io/projected/309930f6-c8a8-487c-b74e-d2010aedd851-kube-api-access-sv6cj\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.818071 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.142558 4751 generic.go:334] "Generic (PLEG): container finished" podID="309930f6-c8a8-487c-b74e-d2010aedd851" containerID="21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317" exitCode=0 Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.142608 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.142656 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6htrn" event={"ID":"309930f6-c8a8-487c-b74e-d2010aedd851","Type":"ContainerDied","Data":"21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317"} Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.143011 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6htrn" event={"ID":"309930f6-c8a8-487c-b74e-d2010aedd851","Type":"ContainerDied","Data":"4a229657d088af7f49d719c2a6ef1d4ee422502663ee61b387d5d7457dd49ff6"} Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.143042 4751 scope.go:117] "RemoveContainer" containerID="21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.189084 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6htrn"] Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.189254 4751 scope.go:117] "RemoveContainer" containerID="d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.205451 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6htrn"] Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.215755 4751 scope.go:117] "RemoveContainer" containerID="3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.288448 4751 scope.go:117] "RemoveContainer" containerID="21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317" Jan 30 22:03:48 crc kubenswrapper[4751]: E0130 22:03:48.289588 4751 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317\": container with ID starting with 21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317 not found: ID does not exist" containerID="21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.289620 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317"} err="failed to get container status \"21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317\": rpc error: code = NotFound desc = could not find container \"21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317\": container with ID starting with 21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317 not found: ID does not exist" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.289642 4751 scope.go:117] "RemoveContainer" containerID="d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53" Jan 30 22:03:48 crc kubenswrapper[4751]: E0130 22:03:48.289971 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53\": container with ID starting with d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53 not found: ID does not exist" containerID="d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.290023 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53"} err="failed to get container status \"d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53\": rpc error: code = NotFound desc = could not find container \"d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53\": container with ID starting with d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53 not found: ID does not exist" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.290057 4751 scope.go:117] "RemoveContainer" containerID="3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d" Jan 30 22:03:48 crc kubenswrapper[4751]: E0130 22:03:48.290819 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d\": container with ID starting with 3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d not found: ID does not exist" containerID="3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.290882 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d"} err="failed to get container status \"3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d\": rpc error: code = NotFound desc = could not find container \"3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d\": container with ID starting with 3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d not found: ID does not exist" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.975461 4751 scope.go:117] "RemoveContainer" 
containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:03:48 crc kubenswrapper[4751]: E0130 22:03:48.976003 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:03:49 crc kubenswrapper[4751]: I0130 22:03:49.994022 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="309930f6-c8a8-487c-b74e-d2010aedd851" path="/var/lib/kubelet/pods/309930f6-c8a8-487c-b74e-d2010aedd851/volumes" Jan 30 22:04:02 crc kubenswrapper[4751]: I0130 22:04:02.977058 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:04:02 crc kubenswrapper[4751]: E0130 22:04:02.977865 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:04:14 crc kubenswrapper[4751]: I0130 22:04:14.976714 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:04:14 crc kubenswrapper[4751]: E0130 22:04:14.977945 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:04:26 crc kubenswrapper[4751]: I0130 22:04:26.975714 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:04:26 crc kubenswrapper[4751]: E0130 22:04:26.976640 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:04:37 crc kubenswrapper[4751]: I0130 22:04:37.976930 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:04:37 crc kubenswrapper[4751]: E0130 22:04:37.977839 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:04:48 crc kubenswrapper[4751]: 
E0130 22:04:48.256548 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93c2956e_910c_4604_a9ba_86289f854a59.slice/crio-e561a4f54ddbb4f307aca022b6eee2605073ff32bfb5b070e34d1d12cbb217a9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93c2956e_910c_4604_a9ba_86289f854a59.slice/crio-conmon-e561a4f54ddbb4f307aca022b6eee2605073ff32bfb5b070e34d1d12cbb217a9.scope\": RecentStats: unable to find data in memory cache]" Jan 30 22:04:48 crc kubenswrapper[4751]: E0130 22:04:48.256688 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93c2956e_910c_4604_a9ba_86289f854a59.slice/crio-e561a4f54ddbb4f307aca022b6eee2605073ff32bfb5b070e34d1d12cbb217a9.scope\": RecentStats: unable to find data in memory cache]" Jan 30 22:04:48 crc kubenswrapper[4751]: I0130 22:04:48.791290 4751 generic.go:334] "Generic (PLEG): container finished" podID="93c2956e-910c-4604-a9ba-86289f854a59" containerID="e561a4f54ddbb4f307aca022b6eee2605073ff32bfb5b070e34d1d12cbb217a9" exitCode=0 Jan 30 22:04:48 crc kubenswrapper[4751]: I0130 22:04:48.791377 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" event={"ID":"93c2956e-910c-4604-a9ba-86289f854a59","Type":"ContainerDied","Data":"e561a4f54ddbb4f307aca022b6eee2605073ff32bfb5b070e34d1d12cbb217a9"} Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.322638 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.482963 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-0\") pod \"93c2956e-910c-4604-a9ba-86289f854a59\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.483037 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-inventory\") pod \"93c2956e-910c-4604-a9ba-86289f854a59\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.483098 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ssh-key-openstack-edpm-ipam\") pod \"93c2956e-910c-4604-a9ba-86289f854a59\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.483274 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7qg6\" (UniqueName: \"kubernetes.io/projected/93c2956e-910c-4604-a9ba-86289f854a59-kube-api-access-m7qg6\") pod \"93c2956e-910c-4604-a9ba-86289f854a59\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.483377 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-telemetry-combined-ca-bundle\") pod \"93c2956e-910c-4604-a9ba-86289f854a59\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.483448 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-1\") pod \"93c2956e-910c-4604-a9ba-86289f854a59\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.483514 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-2\") pod \"93c2956e-910c-4604-a9ba-86289f854a59\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.490692 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c2956e-910c-4604-a9ba-86289f854a59-kube-api-access-m7qg6" (OuterVolumeSpecName: "kube-api-access-m7qg6") pod "93c2956e-910c-4604-a9ba-86289f854a59" (UID: "93c2956e-910c-4604-a9ba-86289f854a59"). InnerVolumeSpecName "kube-api-access-m7qg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.497544 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "93c2956e-910c-4604-a9ba-86289f854a59" (UID: "93c2956e-910c-4604-a9ba-86289f854a59"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.519090 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "93c2956e-910c-4604-a9ba-86289f854a59" (UID: "93c2956e-910c-4604-a9ba-86289f854a59"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.521746 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "93c2956e-910c-4604-a9ba-86289f854a59" (UID: "93c2956e-910c-4604-a9ba-86289f854a59"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.522699 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-inventory" (OuterVolumeSpecName: "inventory") pod "93c2956e-910c-4604-a9ba-86289f854a59" (UID: "93c2956e-910c-4604-a9ba-86289f854a59"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.526217 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "93c2956e-910c-4604-a9ba-86289f854a59" (UID: "93c2956e-910c-4604-a9ba-86289f854a59"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.530119 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "93c2956e-910c-4604-a9ba-86289f854a59" (UID: "93c2956e-910c-4604-a9ba-86289f854a59"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.588101 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7qg6\" (UniqueName: \"kubernetes.io/projected/93c2956e-910c-4604-a9ba-86289f854a59-kube-api-access-m7qg6\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.588157 4751 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.588171 4751 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.588185 4751 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.588198 4751 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.588212 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.588226 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.817016 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" event={"ID":"93c2956e-910c-4604-a9ba-86289f854a59","Type":"ContainerDied","Data":"bfa35e5976d0f1134073dbb445bffda8319a9310e44f3019d580e15386f4c974"} Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.817515 4751 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="bfa35e5976d0f1134073dbb445bffda8319a9310e44f3019d580e15386f4c974" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.817068 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.917806 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v"] Jan 30 22:04:50 crc kubenswrapper[4751]: E0130 22:04:50.918402 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c2956e-910c-4604-a9ba-86289f854a59" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.918418 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c2956e-910c-4604-a9ba-86289f854a59" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 22:04:50 crc kubenswrapper[4751]: E0130 22:04:50.918440 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="309930f6-c8a8-487c-b74e-d2010aedd851" containerName="extract-utilities" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.918446 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="309930f6-c8a8-487c-b74e-d2010aedd851" containerName="extract-utilities" Jan 30 22:04:50 crc kubenswrapper[4751]: E0130 22:04:50.918463 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="309930f6-c8a8-487c-b74e-d2010aedd851" containerName="registry-server" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.918468 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="309930f6-c8a8-487c-b74e-d2010aedd851" containerName="registry-server" Jan 30 22:04:50 crc kubenswrapper[4751]: E0130 22:04:50.918480 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="309930f6-c8a8-487c-b74e-d2010aedd851" containerName="extract-content" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.918552 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="309930f6-c8a8-487c-b74e-d2010aedd851" containerName="extract-content" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.918843 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="309930f6-c8a8-487c-b74e-d2010aedd851" containerName="registry-server" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.918871 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c2956e-910c-4604-a9ba-86289f854a59" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.919732 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.923819 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.923937 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.924059 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.924126 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.930285 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.930925 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v"] Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.104007 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.104242 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5clxf\" (UniqueName: \"kubernetes.io/projected/ac636140-8b68-474a-a7f9-7d46e6a22de0-kube-api-access-5clxf\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.104517 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.104675 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.105484 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-0\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.105817 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.105965 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.210629 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.210933 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.211224 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.211450 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5clxf\" (UniqueName: \"kubernetes.io/projected/ac636140-8b68-474a-a7f9-7d46e6a22de0-kube-api-access-5clxf\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.211580 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 
22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.211710 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.212295 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.217837 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.218270 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.218654 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.219620 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.219667 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.227990 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.233608 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5clxf\" (UniqueName: \"kubernetes.io/projected/ac636140-8b68-474a-a7f9-7d46e6a22de0-kube-api-access-5clxf\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.256470 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.817997 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v"] Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.991579 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:04:51 crc kubenswrapper[4751]: E0130 22:04:51.992130 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:04:52 crc kubenswrapper[4751]: I0130 22:04:52.847930 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" event={"ID":"ac636140-8b68-474a-a7f9-7d46e6a22de0","Type":"ContainerStarted","Data":"d5854d35428dbe82b166fa29bb387cba4ba66dda4f1e4fe2b120d899a0114692"} Jan 30 22:04:52 crc kubenswrapper[4751]: I0130 22:04:52.848302 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" event={"ID":"ac636140-8b68-474a-a7f9-7d46e6a22de0","Type":"ContainerStarted","Data":"332f4559336eaf6a29c2a93a27c6b1335414d93c7cfb9ce025b56c0626f75947"} Jan 30 22:04:52 crc kubenswrapper[4751]: I0130 22:04:52.877453 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" podStartSLOduration=2.4155313290000002 podStartE2EDuration="2.877430022s" podCreationTimestamp="2026-01-30 22:04:50 +0000 UTC" firstStartedPulling="2026-01-30 22:04:51.829498293 +0000 UTC m=+3030.575320942" lastFinishedPulling="2026-01-30 22:04:52.291396986 +0000 UTC m=+3031.037219635" observedRunningTime="2026-01-30 22:04:52.871953495 +0000 UTC m=+3031.617776164" watchObservedRunningTime="2026-01-30 22:04:52.877430022 +0000 UTC m=+3031.623252671" Jan 30 22:05:04 crc kubenswrapper[4751]: I0130 22:05:04.976082 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:05:04 crc kubenswrapper[4751]: E0130 22:05:04.976992 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:05:18 crc kubenswrapper[4751]: I0130 22:05:18.976699 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:05:18 crc kubenswrapper[4751]: E0130 22:05:18.977717 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:05:32 crc kubenswrapper[4751]: I0130 22:05:32.976389 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:05:32 crc kubenswrapper[4751]: E0130 22:05:32.977481 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:05:44 crc kubenswrapper[4751]: I0130 22:05:44.976112 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:05:44 crc kubenswrapper[4751]: E0130 22:05:44.977208 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:05:55 crc kubenswrapper[4751]: I0130 22:05:55.975910 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:05:55 crc kubenswrapper[4751]: E0130 22:05:55.978255 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:06:06 crc kubenswrapper[4751]: I0130 22:06:06.977285 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:06:06 crc kubenswrapper[4751]: E0130 22:06:06.978626 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:06:19 crc kubenswrapper[4751]: I0130 22:06:19.976068 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:06:19 crc kubenswrapper[4751]: E0130 22:06:19.977634 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:06:27 crc kubenswrapper[4751]: I0130 22:06:27.903679 4751 scope.go:117] "RemoveContainer" containerID="145648500fb4fe24047f9789895bde02ee47ed1fcc6d67993ff7dc9ab1a1638c" Jan 30 22:06:27 crc kubenswrapper[4751]: I0130 22:06:27.931146 4751 scope.go:117] "RemoveContainer" containerID="4dcaa711832a9bcefff451d85870a1e1c9f1f1df5c264b8880f8f7854b2f6a5e" Jan 30 22:06:27 crc kubenswrapper[4751]: I0130 22:06:27.992099 4751 scope.go:117] "RemoveContainer" containerID="ff119fbbce42b05dc2414135ad63839e511e523a3e25e47a1bb53a0ee43eb1ea" Jan 30 22:06:33 crc kubenswrapper[4751]: I0130 22:06:33.975624 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:06:33 crc kubenswrapper[4751]: E0130 22:06:33.977689 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:06:36 crc kubenswrapper[4751]: I0130 22:06:36.042396 4751 generic.go:334] "Generic (PLEG): container finished" podID="ac636140-8b68-474a-a7f9-7d46e6a22de0" containerID="d5854d35428dbe82b166fa29bb387cba4ba66dda4f1e4fe2b120d899a0114692" exitCode=0 Jan 30 22:06:36 crc kubenswrapper[4751]: I0130 22:06:36.042621 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" event={"ID":"ac636140-8b68-474a-a7f9-7d46e6a22de0","Type":"ContainerDied","Data":"d5854d35428dbe82b166fa29bb387cba4ba66dda4f1e4fe2b120d899a0114692"} Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.557453 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.661104 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-2\") pod \"ac636140-8b68-474a-a7f9-7d46e6a22de0\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.661340 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-0\") pod \"ac636140-8b68-474a-a7f9-7d46e6a22de0\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.661517 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ssh-key-openstack-edpm-ipam\") pod \"ac636140-8b68-474a-a7f9-7d46e6a22de0\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.661573 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5clxf\" (UniqueName: \"kubernetes.io/projected/ac636140-8b68-474a-a7f9-7d46e6a22de0-kube-api-access-5clxf\") pod \"ac636140-8b68-474a-a7f9-7d46e6a22de0\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.661786 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-1\") pod \"ac636140-8b68-474a-a7f9-7d46e6a22de0\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.662491 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-telemetry-power-monitoring-combined-ca-bundle\") pod \"ac636140-8b68-474a-a7f9-7d46e6a22de0\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.662583 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-inventory\") pod \"ac636140-8b68-474a-a7f9-7d46e6a22de0\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.669697 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "ac636140-8b68-474a-a7f9-7d46e6a22de0" (UID: "ac636140-8b68-474a-a7f9-7d46e6a22de0"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.673415 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac636140-8b68-474a-a7f9-7d46e6a22de0-kube-api-access-5clxf" (OuterVolumeSpecName: "kube-api-access-5clxf") pod "ac636140-8b68-474a-a7f9-7d46e6a22de0" (UID: "ac636140-8b68-474a-a7f9-7d46e6a22de0"). InnerVolumeSpecName "kube-api-access-5clxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.698140 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "ac636140-8b68-474a-a7f9-7d46e6a22de0" (UID: "ac636140-8b68-474a-a7f9-7d46e6a22de0"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.699832 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "ac636140-8b68-474a-a7f9-7d46e6a22de0" (UID: "ac636140-8b68-474a-a7f9-7d46e6a22de0"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.708376 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-inventory" (OuterVolumeSpecName: "inventory") pod "ac636140-8b68-474a-a7f9-7d46e6a22de0" (UID: "ac636140-8b68-474a-a7f9-7d46e6a22de0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.708495 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "ac636140-8b68-474a-a7f9-7d46e6a22de0" (UID: "ac636140-8b68-474a-a7f9-7d46e6a22de0"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.709700 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ac636140-8b68-474a-a7f9-7d46e6a22de0" (UID: "ac636140-8b68-474a-a7f9-7d46e6a22de0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.765870 4751 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.765905 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.765916 4751 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.765930 4751 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.765939 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.765949 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5clxf\" (UniqueName: \"kubernetes.io/projected/ac636140-8b68-474a-a7f9-7d46e6a22de0-kube-api-access-5clxf\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.765957 4751 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.066642 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" event={"ID":"ac636140-8b68-474a-a7f9-7d46e6a22de0","Type":"ContainerDied","Data":"332f4559336eaf6a29c2a93a27c6b1335414d93c7cfb9ce025b56c0626f75947"} Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.066967 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="332f4559336eaf6a29c2a93a27c6b1335414d93c7cfb9ce025b56c0626f75947" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.066699 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.227887 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm"] Jan 30 22:06:38 crc kubenswrapper[4751]: E0130 22:06:38.243011 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac636140-8b68-474a-a7f9-7d46e6a22de0" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.243185 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac636140-8b68-474a-a7f9-7d46e6a22de0" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.247368 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac636140-8b68-474a-a7f9-7d46e6a22de0" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.249249 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.254378 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.254581 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.254857 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.255054 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.255347 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.275726 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm"] Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.278059 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.278578 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.278692 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzrs9\" (UniqueName: \"kubernetes.io/projected/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-kube-api-access-fzrs9\") 
pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.278923 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.278978 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.382996 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.383077 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzrs9\" (UniqueName: \"kubernetes.io/projected/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-kube-api-access-fzrs9\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.383174 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.383213 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.383305 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.387027 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.387599 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.387882 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.388292 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.402731 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzrs9\" (UniqueName: \"kubernetes.io/projected/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-kube-api-access-fzrs9\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.581208 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:39 crc kubenswrapper[4751]: I0130 22:06:39.132494 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:06:39 crc kubenswrapper[4751]: I0130 22:06:39.134311 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm"] Jan 30 22:06:40 crc kubenswrapper[4751]: I0130 22:06:40.091225 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" event={"ID":"61149618-7cc3-4dd6-b61a-0fb8226f2cc1","Type":"ContainerStarted","Data":"91af7206bc59e56c511a5bb1490384ea5d6919d8e986e93a29d06a5b971b37d4"} Jan 30 22:06:40 crc kubenswrapper[4751]: I0130 22:06:40.091524 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" event={"ID":"61149618-7cc3-4dd6-b61a-0fb8226f2cc1","Type":"ContainerStarted","Data":"92dbf94d199fec6f4332eccc79025d40e821e34facf432cb7e9915a997164fe6"} Jan 30 22:06:40 crc kubenswrapper[4751]: I0130 22:06:40.120184 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" podStartSLOduration=1.719145718 podStartE2EDuration="2.120161691s" podCreationTimestamp="2026-01-30 22:06:38 +0000 UTC" firstStartedPulling="2026-01-30 22:06:39.132210908 +0000 UTC m=+3137.878033557" lastFinishedPulling="2026-01-30 22:06:39.533226881 +0000 UTC m=+3138.279049530" observedRunningTime="2026-01-30 22:06:40.111545959 +0000 UTC m=+3138.857368618" watchObservedRunningTime="2026-01-30 22:06:40.120161691 +0000 UTC m=+3138.865984340" Jan 30 22:06:47 crc kubenswrapper[4751]: I0130 22:06:47.983188 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:06:47 crc kubenswrapper[4751]: E0130 22:06:47.983995 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:06:54 crc kubenswrapper[4751]: I0130 22:06:54.261973 4751 generic.go:334] "Generic (PLEG): container finished" podID="61149618-7cc3-4dd6-b61a-0fb8226f2cc1" containerID="91af7206bc59e56c511a5bb1490384ea5d6919d8e986e93a29d06a5b971b37d4" exitCode=0 Jan 30 22:06:54 crc kubenswrapper[4751]: I0130 22:06:54.262047 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" event={"ID":"61149618-7cc3-4dd6-b61a-0fb8226f2cc1","Type":"ContainerDied","Data":"91af7206bc59e56c511a5bb1490384ea5d6919d8e986e93a29d06a5b971b37d4"} Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.775621 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.920790 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-1\") pod \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.921025 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzrs9\" (UniqueName: \"kubernetes.io/projected/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-kube-api-access-fzrs9\") pod \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.921060 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-ssh-key-openstack-edpm-ipam\") pod \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.921232 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-0\") pod \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.921264 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-inventory\") pod \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.926852 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-kube-api-access-fzrs9" (OuterVolumeSpecName: "kube-api-access-fzrs9") pod "61149618-7cc3-4dd6-b61a-0fb8226f2cc1" (UID: "61149618-7cc3-4dd6-b61a-0fb8226f2cc1"). InnerVolumeSpecName "kube-api-access-fzrs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.960766 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "61149618-7cc3-4dd6-b61a-0fb8226f2cc1" (UID: "61149618-7cc3-4dd6-b61a-0fb8226f2cc1"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.973138 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-inventory" (OuterVolumeSpecName: "inventory") pod "61149618-7cc3-4dd6-b61a-0fb8226f2cc1" (UID: "61149618-7cc3-4dd6-b61a-0fb8226f2cc1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.974441 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "61149618-7cc3-4dd6-b61a-0fb8226f2cc1" (UID: "61149618-7cc3-4dd6-b61a-0fb8226f2cc1"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.985302 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "61149618-7cc3-4dd6-b61a-0fb8226f2cc1" (UID: "61149618-7cc3-4dd6-b61a-0fb8226f2cc1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:56 crc kubenswrapper[4751]: I0130 22:06:56.023913 4751 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:56 crc kubenswrapper[4751]: I0130 22:06:56.023947 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzrs9\" (UniqueName: \"kubernetes.io/projected/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-kube-api-access-fzrs9\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:56 crc kubenswrapper[4751]: I0130 22:06:56.023956 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:56 crc kubenswrapper[4751]: I0130 22:06:56.023967 4751 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:56 crc kubenswrapper[4751]: I0130 22:06:56.023979 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:56 crc kubenswrapper[4751]: I0130 22:06:56.283403 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" event={"ID":"61149618-7cc3-4dd6-b61a-0fb8226f2cc1","Type":"ContainerDied","Data":"92dbf94d199fec6f4332eccc79025d40e821e34facf432cb7e9915a997164fe6"} Jan 30 22:06:56 crc kubenswrapper[4751]: I0130 22:06:56.283448 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92dbf94d199fec6f4332eccc79025d40e821e34facf432cb7e9915a997164fe6" Jan 30 22:06:56 crc kubenswrapper[4751]: I0130 22:06:56.283501 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:07:01 crc kubenswrapper[4751]: I0130 22:07:01.985797 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:07:01 crc kubenswrapper[4751]: E0130 22:07:01.986594 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:07:13 crc kubenswrapper[4751]: I0130 22:07:13.976124 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:07:13 crc kubenswrapper[4751]: E0130 22:07:13.976909 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:07:24 crc kubenswrapper[4751]: I0130 22:07:24.977091 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:07:25 crc kubenswrapper[4751]: I0130 22:07:25.578792 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"b7204a414860b1d9f7ebaccba0c3c85f4ccaeeed68090f146baeabd5dcaab619"} Jan 30 22:07:44 crc kubenswrapper[4751]: E0130 22:07:44.043270 4751 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.39:46226->38.102.83.39:41127: write tcp 38.102.83.39:46226->38.102.83.39:41127: write: broken pipe Jan 30 22:09:54 crc kubenswrapper[4751]: I0130 22:09:54.126364 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:09:54 crc kubenswrapper[4751]: I0130 22:09:54.128011 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:10:24 crc kubenswrapper[4751]: I0130 22:10:24.126341 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:10:24 crc kubenswrapper[4751]: I0130 22:10:24.126938 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:10:54 crc kubenswrapper[4751]: I0130 22:10:54.127394 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:10:54 crc kubenswrapper[4751]: I0130 22:10:54.128014 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:10:54 crc kubenswrapper[4751]: I0130 22:10:54.128073 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 22:10:54 crc kubenswrapper[4751]: I0130 22:10:54.129026 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b7204a414860b1d9f7ebaccba0c3c85f4ccaeeed68090f146baeabd5dcaab619"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:10:54 crc kubenswrapper[4751]: I0130 22:10:54.129087 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://b7204a414860b1d9f7ebaccba0c3c85f4ccaeeed68090f146baeabd5dcaab619" gracePeriod=600 Jan 30 22:10:54 crc kubenswrapper[4751]: I0130 22:10:54.768841 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="b7204a414860b1d9f7ebaccba0c3c85f4ccaeeed68090f146baeabd5dcaab619" exitCode=0 Jan 30 22:10:54 crc kubenswrapper[4751]: I0130 22:10:54.769268 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"b7204a414860b1d9f7ebaccba0c3c85f4ccaeeed68090f146baeabd5dcaab619"} Jan 30 22:10:54 crc kubenswrapper[4751]: I0130 22:10:54.769293 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"} Jan 30 22:10:54 crc kubenswrapper[4751]: I0130 22:10:54.769309 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:11:02 crc kubenswrapper[4751]: E0130 22:11:02.773358 4751 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.39:38106->38.102.83.39:41127: write tcp 38.102.83.39:38106->38.102.83.39:41127: write: broken pipe Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.706040 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g256h"] Jan 30 22:11:34 crc kubenswrapper[4751]: E0130 22:11:34.707113 4751 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61149618-7cc3-4dd6-b61a-0fb8226f2cc1" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.707128 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="61149618-7cc3-4dd6-b61a-0fb8226f2cc1" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.707373 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="61149618-7cc3-4dd6-b61a-0fb8226f2cc1" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.709003 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.723369 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g256h"] Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.836300 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-utilities\") pod \"certified-operators-g256h\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.836720 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwbqg\" (UniqueName: \"kubernetes.io/projected/a54efd8f-acd9-4019-8a15-da81fc80ad4d-kube-api-access-pwbqg\") pod \"certified-operators-g256h\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.836898 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-catalog-content\") pod \"certified-operators-g256h\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.939628 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwbqg\" (UniqueName: \"kubernetes.io/projected/a54efd8f-acd9-4019-8a15-da81fc80ad4d-kube-api-access-pwbqg\") pod \"certified-operators-g256h\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.939771 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-catalog-content\") pod \"certified-operators-g256h\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.940021 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-utilities\") pod \"certified-operators-g256h\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.940793 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-catalog-content\") pod \"certified-operators-g256h\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.940834 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-utilities\") pod \"certified-operators-g256h\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.960941 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwbqg\" (UniqueName: \"kubernetes.io/projected/a54efd8f-acd9-4019-8a15-da81fc80ad4d-kube-api-access-pwbqg\") pod \"certified-operators-g256h\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:35 crc kubenswrapper[4751]: I0130 22:11:35.029036 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:35 crc kubenswrapper[4751]: I0130 22:11:35.737111 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g256h"] Jan 30 22:11:36 crc kubenswrapper[4751]: I0130 22:11:36.222604 4751 generic.go:334] "Generic (PLEG): container finished" podID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerID="3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a" exitCode=0 Jan 30 22:11:36 crc kubenswrapper[4751]: I0130 22:11:36.222662 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g256h" event={"ID":"a54efd8f-acd9-4019-8a15-da81fc80ad4d","Type":"ContainerDied","Data":"3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a"} Jan 30 22:11:36 crc kubenswrapper[4751]: I0130 22:11:36.222942 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g256h" event={"ID":"a54efd8f-acd9-4019-8a15-da81fc80ad4d","Type":"ContainerStarted","Data":"1b7078cb60a3b033349673f6ec5cf0c8630bd6c1147e95ae4b0ffca0c14c2e81"} Jan 30 22:11:37 crc kubenswrapper[4751]: I0130 22:11:37.236285 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g256h" event={"ID":"a54efd8f-acd9-4019-8a15-da81fc80ad4d","Type":"ContainerStarted","Data":"ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b"} Jan 30 22:11:39 crc kubenswrapper[4751]: I0130 22:11:39.257258 4751 generic.go:334] "Generic (PLEG): container finished" podID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerID="ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b" exitCode=0 Jan 30 22:11:39 crc kubenswrapper[4751]: I0130 22:11:39.257393 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g256h" event={"ID":"a54efd8f-acd9-4019-8a15-da81fc80ad4d","Type":"ContainerDied","Data":"ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b"} Jan 30 22:11:39 crc kubenswrapper[4751]: I0130 22:11:39.261430 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:11:40 crc kubenswrapper[4751]: I0130 22:11:40.269142 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g256h" 
event={"ID":"a54efd8f-acd9-4019-8a15-da81fc80ad4d","Type":"ContainerStarted","Data":"bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e"} Jan 30 22:11:40 crc kubenswrapper[4751]: I0130 22:11:40.302715 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g256h" podStartSLOduration=2.750842593 podStartE2EDuration="6.302692564s" podCreationTimestamp="2026-01-30 22:11:34 +0000 UTC" firstStartedPulling="2026-01-30 22:11:36.224922597 +0000 UTC m=+3434.970745246" lastFinishedPulling="2026-01-30 22:11:39.776772568 +0000 UTC m=+3438.522595217" observedRunningTime="2026-01-30 22:11:40.290003383 +0000 UTC m=+3439.035826032" watchObservedRunningTime="2026-01-30 22:11:40.302692564 +0000 UTC m=+3439.048515213" Jan 30 22:11:45 crc kubenswrapper[4751]: I0130 22:11:45.029389 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:45 crc kubenswrapper[4751]: I0130 22:11:45.030085 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:45 crc kubenswrapper[4751]: I0130 22:11:45.084875 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:45 crc kubenswrapper[4751]: I0130 22:11:45.371928 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:45 crc kubenswrapper[4751]: I0130 22:11:45.424005 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g256h"] Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.339384 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g256h" podUID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerName="registry-server" containerID="cri-o://bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e" gracePeriod=2 Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.743315 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gtfms"] Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.748310 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.778189 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gtfms"] Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.846566 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-catalog-content\") pod \"redhat-operators-gtfms\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") " pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.846650 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqhc8\" (UniqueName: \"kubernetes.io/projected/e5e3f459-601d-4d72-a9c9-8113f86749e6-kube-api-access-mqhc8\") pod \"redhat-operators-gtfms\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") " pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.846765 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-utilities\") pod \"redhat-operators-gtfms\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") " pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.949190 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-catalog-content\") pod \"redhat-operators-gtfms\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") " pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.949297 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqhc8\" (UniqueName: \"kubernetes.io/projected/e5e3f459-601d-4d72-a9c9-8113f86749e6-kube-api-access-mqhc8\") pod \"redhat-operators-gtfms\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") " pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.949483 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-utilities\") pod \"redhat-operators-gtfms\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") " pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.950107 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-catalog-content\") pod \"redhat-operators-gtfms\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") " pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.953688 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-utilities\") pod \"redhat-operators-gtfms\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") " pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.971671 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mqhc8\" (UniqueName: \"kubernetes.io/projected/e5e3f459-601d-4d72-a9c9-8113f86749e6-kube-api-access-mqhc8\") pod \"redhat-operators-gtfms\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") " pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.069237 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.091280 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.270759 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwbqg\" (UniqueName: \"kubernetes.io/projected/a54efd8f-acd9-4019-8a15-da81fc80ad4d-kube-api-access-pwbqg\") pod \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.271118 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-catalog-content\") pod \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.271156 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-utilities\") pod \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.272080 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-utilities" (OuterVolumeSpecName: "utilities") pod "a54efd8f-acd9-4019-8a15-da81fc80ad4d" (UID: "a54efd8f-acd9-4019-8a15-da81fc80ad4d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.276457 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a54efd8f-acd9-4019-8a15-da81fc80ad4d-kube-api-access-pwbqg" (OuterVolumeSpecName: "kube-api-access-pwbqg") pod "a54efd8f-acd9-4019-8a15-da81fc80ad4d" (UID: "a54efd8f-acd9-4019-8a15-da81fc80ad4d"). InnerVolumeSpecName "kube-api-access-pwbqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.337357 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a54efd8f-acd9-4019-8a15-da81fc80ad4d" (UID: "a54efd8f-acd9-4019-8a15-da81fc80ad4d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.374224 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwbqg\" (UniqueName: \"kubernetes.io/projected/a54efd8f-acd9-4019-8a15-da81fc80ad4d-kube-api-access-pwbqg\") on node \"crc\" DevicePath \"\"" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.374254 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.374264 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.376835 4751 generic.go:334] "Generic (PLEG): container finished" podID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerID="bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e" exitCode=0 Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.376871 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g256h" event={"ID":"a54efd8f-acd9-4019-8a15-da81fc80ad4d","Type":"ContainerDied","Data":"bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e"} Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.376897 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g256h" event={"ID":"a54efd8f-acd9-4019-8a15-da81fc80ad4d","Type":"ContainerDied","Data":"1b7078cb60a3b033349673f6ec5cf0c8630bd6c1147e95ae4b0ffca0c14c2e81"} Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.376916 4751 scope.go:117] "RemoveContainer" containerID="bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.377069 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.441513 4751 scope.go:117] "RemoveContainer" containerID="ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.454258 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g256h"] Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.469042 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g256h"] Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.522822 4751 scope.go:117] "RemoveContainer" containerID="3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.580485 4751 scope.go:117] "RemoveContainer" containerID="bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e" Jan 30 22:11:48 crc kubenswrapper[4751]: E0130 22:11:48.581451 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e\": container with ID starting with bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e not found: ID does not exist" containerID="bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.581492 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e"} err="failed to get container status \"bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e\": rpc error: code = NotFound desc = could not find container \"bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e\": container with ID starting with bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e not found: ID does not exist" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.581518 4751 scope.go:117] "RemoveContainer" containerID="ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b" Jan 30 22:11:48 crc kubenswrapper[4751]: E0130 22:11:48.581774 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b\": container with ID starting with ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b not found: ID does not exist" containerID="ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.581796 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b"} err="failed to get container status \"ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b\": rpc error: code = NotFound desc = could not find container \"ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b\": container with ID starting with ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b not found: ID does not exist" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.581811 4751 scope.go:117] "RemoveContainer" containerID="3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a" Jan 30 22:11:48 crc kubenswrapper[4751]: E0130 22:11:48.583410 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a\": container with ID starting with 3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a not found: ID does not exist" containerID="3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.583459 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a"} err="failed to get container status \"3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a\": rpc error: code = NotFound desc = could not find container \"3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a\": container with ID starting with 3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a not found: ID does not exist" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.766646 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gtfms"] Jan 30 22:11:49 crc kubenswrapper[4751]: I0130 22:11:49.390527 4751 generic.go:334] "Generic (PLEG): container finished" podID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerID="f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610" exitCode=0 Jan 30 22:11:49 crc kubenswrapper[4751]: I0130 22:11:49.390650 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtfms" event={"ID":"e5e3f459-601d-4d72-a9c9-8113f86749e6","Type":"ContainerDied","Data":"f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610"} Jan 30 22:11:49 crc kubenswrapper[4751]: I0130 22:11:49.390844 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtfms" event={"ID":"e5e3f459-601d-4d72-a9c9-8113f86749e6","Type":"ContainerStarted","Data":"8454d78d5588531b7a20a01473bd459440e4383a613aff3fdd243ec56ac18a03"} Jan 30 22:11:49 crc kubenswrapper[4751]: I0130 22:11:49.989803 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" path="/var/lib/kubelet/pods/a54efd8f-acd9-4019-8a15-da81fc80ad4d/volumes" Jan 30 22:11:50 crc kubenswrapper[4751]: I0130 22:11:50.407307 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtfms" event={"ID":"e5e3f459-601d-4d72-a9c9-8113f86749e6","Type":"ContainerStarted","Data":"86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2"} Jan 30 22:11:55 crc kubenswrapper[4751]: I0130 22:11:55.460086 4751 generic.go:334] "Generic (PLEG): container finished" podID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerID="86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2" exitCode=0 Jan 30 22:11:55 crc kubenswrapper[4751]: I0130 22:11:55.460606 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtfms" event={"ID":"e5e3f459-601d-4d72-a9c9-8113f86749e6","Type":"ContainerDied","Data":"86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2"} Jan 30 22:11:56 crc kubenswrapper[4751]: I0130 22:11:56.475610 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtfms" event={"ID":"e5e3f459-601d-4d72-a9c9-8113f86749e6","Type":"ContainerStarted","Data":"a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69"} Jan 30 22:11:56 crc kubenswrapper[4751]: I0130 22:11:56.504105 4751 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gtfms" podStartSLOduration=3.068434338 podStartE2EDuration="9.504086103s" podCreationTimestamp="2026-01-30 22:11:47 +0000 UTC" firstStartedPulling="2026-01-30 22:11:49.394036081 +0000 UTC m=+3448.139858730" lastFinishedPulling="2026-01-30 22:11:55.829687846 +0000 UTC m=+3454.575510495" observedRunningTime="2026-01-30 22:11:56.497435613 +0000 UTC m=+3455.243258272" watchObservedRunningTime="2026-01-30 22:11:56.504086103 +0000 UTC m=+3455.249908752" Jan 30 22:11:58 crc kubenswrapper[4751]: I0130 22:11:58.070283 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:58 crc kubenswrapper[4751]: I0130 22:11:58.070616 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:59 crc kubenswrapper[4751]: I0130 22:11:59.119141 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gtfms" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="registry-server" probeResult="failure" output=< Jan 30 22:11:59 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:11:59 crc kubenswrapper[4751]: > Jan 30 22:12:09 crc kubenswrapper[4751]: I0130 22:12:09.121575 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gtfms" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="registry-server" probeResult="failure" output=< Jan 30 22:12:09 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:12:09 crc kubenswrapper[4751]: > Jan 30 22:12:19 crc kubenswrapper[4751]: I0130 22:12:19.135088 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gtfms" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="registry-server" probeResult="failure" output=< Jan 30 22:12:19 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:12:19 crc kubenswrapper[4751]: > Jan 30 22:12:28 crc kubenswrapper[4751]: I0130 22:12:28.135089 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:12:28 crc kubenswrapper[4751]: I0130 22:12:28.188588 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:12:28 crc kubenswrapper[4751]: I0130 22:12:28.381965 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gtfms"] Jan 30 22:12:29 crc kubenswrapper[4751]: I0130 22:12:29.835031 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gtfms" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="registry-server" containerID="cri-o://a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69" gracePeriod=2 Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.452760 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.517081 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqhc8\" (UniqueName: \"kubernetes.io/projected/e5e3f459-601d-4d72-a9c9-8113f86749e6-kube-api-access-mqhc8\") pod \"e5e3f459-601d-4d72-a9c9-8113f86749e6\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") " Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.517157 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-catalog-content\") pod \"e5e3f459-601d-4d72-a9c9-8113f86749e6\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") " Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.517240 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-utilities\") pod \"e5e3f459-601d-4d72-a9c9-8113f86749e6\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") " Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.518401 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-utilities" (OuterVolumeSpecName: "utilities") pod "e5e3f459-601d-4d72-a9c9-8113f86749e6" (UID: "e5e3f459-601d-4d72-a9c9-8113f86749e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.522893 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5e3f459-601d-4d72-a9c9-8113f86749e6-kube-api-access-mqhc8" (OuterVolumeSpecName: "kube-api-access-mqhc8") pod "e5e3f459-601d-4d72-a9c9-8113f86749e6" (UID: "e5e3f459-601d-4d72-a9c9-8113f86749e6"). InnerVolumeSpecName "kube-api-access-mqhc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.619282 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqhc8\" (UniqueName: \"kubernetes.io/projected/e5e3f459-601d-4d72-a9c9-8113f86749e6-kube-api-access-mqhc8\") on node \"crc\" DevicePath \"\"" Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.619313 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.644796 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5e3f459-601d-4d72-a9c9-8113f86749e6" (UID: "e5e3f459-601d-4d72-a9c9-8113f86749e6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.721655 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.858870 4751 generic.go:334] "Generic (PLEG): container finished" podID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerID="a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69" exitCode=0 Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.858922 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtfms" event={"ID":"e5e3f459-601d-4d72-a9c9-8113f86749e6","Type":"ContainerDied","Data":"a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69"} Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.858965 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtfms" event={"ID":"e5e3f459-601d-4d72-a9c9-8113f86749e6","Type":"ContainerDied","Data":"8454d78d5588531b7a20a01473bd459440e4383a613aff3fdd243ec56ac18a03"} Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.858989 4751 scope.go:117] "RemoveContainer" containerID="a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69" Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.859012 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.897655 4751 scope.go:117] "RemoveContainer" containerID="86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2" Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.913357 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gtfms"] Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.933165 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gtfms"] Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.939837 4751 scope.go:117] "RemoveContainer" containerID="f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610" Jan 30 22:12:31 crc kubenswrapper[4751]: I0130 22:12:31.003378 4751 scope.go:117] "RemoveContainer" containerID="a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69" Jan 30 22:12:31 crc kubenswrapper[4751]: E0130 22:12:31.003925 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69\": container with ID starting with a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69 not found: ID does not exist" containerID="a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69" Jan 30 22:12:31 crc kubenswrapper[4751]: I0130 22:12:31.004083 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69"} err="failed to get container status \"a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69\": rpc error: code = NotFound desc = could not find container \"a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69\": container with ID starting with a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69 not found: ID does not exist" Jan 30 22:12:31 crc 
kubenswrapper[4751]: I0130 22:12:31.004246 4751 scope.go:117] "RemoveContainer" containerID="86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2" Jan 30 22:12:31 crc kubenswrapper[4751]: E0130 22:12:31.004783 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2\": container with ID starting with 86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2 not found: ID does not exist" containerID="86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2" Jan 30 22:12:31 crc kubenswrapper[4751]: I0130 22:12:31.004958 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2"} err="failed to get container status \"86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2\": rpc error: code = NotFound desc = could not find container \"86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2\": container with ID starting with 86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2 not found: ID does not exist" Jan 30 22:12:31 crc kubenswrapper[4751]: I0130 22:12:31.005121 4751 scope.go:117] "RemoveContainer" containerID="f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610" Jan 30 22:12:31 crc kubenswrapper[4751]: E0130 22:12:31.005568 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610\": container with ID starting with f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610 not found: ID does not exist" containerID="f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610" Jan 30 22:12:31 crc kubenswrapper[4751]: I0130 22:12:31.005594 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610"} err="failed to get container status \"f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610\": rpc error: code = NotFound desc = could not find container \"f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610\": container with ID starting with f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610 not found: ID does not exist" Jan 30 22:12:31 crc kubenswrapper[4751]: I0130 22:12:31.991308 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" path="/var/lib/kubelet/pods/e5e3f459-601d-4d72-a9c9-8113f86749e6/volumes" Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.783105 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wb8v7"] Jan 30 22:12:36 crc kubenswrapper[4751]: E0130 22:12:36.784072 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerName="extract-content" Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.784086 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerName="extract-content" Jan 30 22:12:36 crc kubenswrapper[4751]: E0130 22:12:36.784099 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="extract-utilities" Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.784105 4751 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="extract-utilities" Jan 30 22:12:36 crc kubenswrapper[4751]: E0130 22:12:36.784124 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerName="extract-utilities" Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.784131 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerName="extract-utilities" Jan 30 22:12:36 crc kubenswrapper[4751]: E0130 22:12:36.784172 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="extract-content" Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.784179 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="extract-content" Jan 30 22:12:36 crc kubenswrapper[4751]: E0130 22:12:36.784195 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="registry-server" Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.784202 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="registry-server" Jan 30 22:12:36 crc kubenswrapper[4751]: E0130 22:12:36.784207 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerName="registry-server" Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.784213 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerName="registry-server" Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.784435 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerName="registry-server" Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.784461 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="registry-server" Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.786432 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.822759 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wb8v7"] Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.872317 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zhm2\" (UniqueName: \"kubernetes.io/projected/b2702e38-6753-43af-9a56-dd00aba1250f-kube-api-access-2zhm2\") pod \"community-operators-wb8v7\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.872419 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-utilities\") pod \"community-operators-wb8v7\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.872477 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-catalog-content\") pod \"community-operators-wb8v7\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.976306 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-utilities\") pod \"community-operators-wb8v7\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.976455 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-catalog-content\") pod \"community-operators-wb8v7\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.976641 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zhm2\" (UniqueName: \"kubernetes.io/projected/b2702e38-6753-43af-9a56-dd00aba1250f-kube-api-access-2zhm2\") pod \"community-operators-wb8v7\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.977093 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-catalog-content\") pod \"community-operators-wb8v7\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.977471 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-utilities\") pod \"community-operators-wb8v7\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:37 crc kubenswrapper[4751]: I0130 22:12:37.004089 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2zhm2\" (UniqueName: \"kubernetes.io/projected/b2702e38-6753-43af-9a56-dd00aba1250f-kube-api-access-2zhm2\") pod \"community-operators-wb8v7\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:37 crc kubenswrapper[4751]: I0130 22:12:37.134138 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:37 crc kubenswrapper[4751]: I0130 22:12:37.741378 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wb8v7"] Jan 30 22:12:37 crc kubenswrapper[4751]: I0130 22:12:37.940977 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb8v7" event={"ID":"b2702e38-6753-43af-9a56-dd00aba1250f","Type":"ContainerStarted","Data":"5440b0621535fecd6a06b59ed63d2ee9b603afd691e3da53d98973f03fbc2cc2"} Jan 30 22:12:38 crc kubenswrapper[4751]: I0130 22:12:38.953977 4751 generic.go:334] "Generic (PLEG): container finished" podID="b2702e38-6753-43af-9a56-dd00aba1250f" containerID="91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d" exitCode=0 Jan 30 22:12:38 crc kubenswrapper[4751]: I0130 22:12:38.954092 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb8v7" event={"ID":"b2702e38-6753-43af-9a56-dd00aba1250f","Type":"ContainerDied","Data":"91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d"} Jan 30 22:12:39 crc kubenswrapper[4751]: I0130 22:12:39.964873 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb8v7" event={"ID":"b2702e38-6753-43af-9a56-dd00aba1250f","Type":"ContainerStarted","Data":"8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e"} Jan 30 22:12:41 crc kubenswrapper[4751]: I0130 22:12:41.987964 4751 generic.go:334] "Generic (PLEG): container finished" podID="b2702e38-6753-43af-9a56-dd00aba1250f" containerID="8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e" exitCode=0 Jan 30 22:12:41 crc kubenswrapper[4751]: I0130 22:12:41.990799 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb8v7" event={"ID":"b2702e38-6753-43af-9a56-dd00aba1250f","Type":"ContainerDied","Data":"8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e"} Jan 30 22:12:43 crc kubenswrapper[4751]: I0130 22:12:43.001753 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb8v7" event={"ID":"b2702e38-6753-43af-9a56-dd00aba1250f","Type":"ContainerStarted","Data":"195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d"} Jan 30 22:12:43 crc kubenswrapper[4751]: I0130 22:12:43.029495 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wb8v7" podStartSLOduration=3.605691608 podStartE2EDuration="7.029476549s" podCreationTimestamp="2026-01-30 22:12:36 +0000 UTC" firstStartedPulling="2026-01-30 22:12:38.957032584 +0000 UTC m=+3497.702855233" lastFinishedPulling="2026-01-30 22:12:42.380817525 +0000 UTC m=+3501.126640174" observedRunningTime="2026-01-30 22:12:43.021549666 +0000 UTC m=+3501.767372315" watchObservedRunningTime="2026-01-30 22:12:43.029476549 +0000 UTC m=+3501.775299198" Jan 30 22:12:47 crc kubenswrapper[4751]: I0130 22:12:47.135134 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:47 crc kubenswrapper[4751]: I0130 22:12:47.135730 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:47 crc kubenswrapper[4751]: I0130 22:12:47.187270 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:48 crc kubenswrapper[4751]: I0130 22:12:48.116852 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:48 crc kubenswrapper[4751]: I0130 22:12:48.428251 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wb8v7"] Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.074276 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wb8v7" podUID="b2702e38-6753-43af-9a56-dd00aba1250f" containerName="registry-server" containerID="cri-o://195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d" gracePeriod=2 Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.600003 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.710069 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-utilities\") pod \"b2702e38-6753-43af-9a56-dd00aba1250f\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.710754 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zhm2\" (UniqueName: \"kubernetes.io/projected/b2702e38-6753-43af-9a56-dd00aba1250f-kube-api-access-2zhm2\") pod \"b2702e38-6753-43af-9a56-dd00aba1250f\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.710785 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-catalog-content\") pod \"b2702e38-6753-43af-9a56-dd00aba1250f\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.710911 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-utilities" (OuterVolumeSpecName: "utilities") pod "b2702e38-6753-43af-9a56-dd00aba1250f" (UID: "b2702e38-6753-43af-9a56-dd00aba1250f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.711668 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.716911 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2702e38-6753-43af-9a56-dd00aba1250f-kube-api-access-2zhm2" (OuterVolumeSpecName: "kube-api-access-2zhm2") pod "b2702e38-6753-43af-9a56-dd00aba1250f" (UID: "b2702e38-6753-43af-9a56-dd00aba1250f"). InnerVolumeSpecName "kube-api-access-2zhm2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.753765 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2702e38-6753-43af-9a56-dd00aba1250f" (UID: "b2702e38-6753-43af-9a56-dd00aba1250f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.814496 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zhm2\" (UniqueName: \"kubernetes.io/projected/b2702e38-6753-43af-9a56-dd00aba1250f-kube-api-access-2zhm2\") on node \"crc\" DevicePath \"\"" Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.814551 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.087224 4751 generic.go:334] "Generic (PLEG): container finished" podID="b2702e38-6753-43af-9a56-dd00aba1250f" containerID="195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d" exitCode=0 Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.087276 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb8v7" event={"ID":"b2702e38-6753-43af-9a56-dd00aba1250f","Type":"ContainerDied","Data":"195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d"} Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.087304 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.087340 4751 scope.go:117] "RemoveContainer" containerID="195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d" Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.087311 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb8v7" event={"ID":"b2702e38-6753-43af-9a56-dd00aba1250f","Type":"ContainerDied","Data":"5440b0621535fecd6a06b59ed63d2ee9b603afd691e3da53d98973f03fbc2cc2"} Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.129688 4751 scope.go:117] "RemoveContainer" containerID="8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e" Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.142973 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wb8v7"] Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.156583 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wb8v7"] Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.160258 4751 scope.go:117] "RemoveContainer" containerID="91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d" Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.219536 4751 scope.go:117] "RemoveContainer" containerID="195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d" Jan 30 22:12:51 crc kubenswrapper[4751]: E0130 22:12:51.220117 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d\": container with ID starting with 
195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d not found: ID does not exist" containerID="195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d" Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.220156 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d"} err="failed to get container status \"195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d\": rpc error: code = NotFound desc = could not find container \"195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d\": container with ID starting with 195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d not found: ID does not exist" Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.220181 4751 scope.go:117] "RemoveContainer" containerID="8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e" Jan 30 22:12:51 crc kubenswrapper[4751]: E0130 22:12:51.220609 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e\": container with ID starting with 8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e not found: ID does not exist" containerID="8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e" Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.220635 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e"} err="failed to get container status \"8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e\": rpc error: code = NotFound desc = could not find container \"8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e\": container with ID starting with 8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e not found: ID does not exist" Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.220649 4751 scope.go:117] "RemoveContainer" containerID="91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d" Jan 30 22:12:51 crc kubenswrapper[4751]: E0130 22:12:51.220958 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d\": container with ID starting with 91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d not found: ID does not exist" containerID="91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d" Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.221016 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d"} err="failed to get container status \"91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d\": rpc error: code = NotFound desc = could not find container \"91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d\": container with ID starting with 91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d not found: ID does not exist" Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.991862 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2702e38-6753-43af-9a56-dd00aba1250f" path="/var/lib/kubelet/pods/b2702e38-6753-43af-9a56-dd00aba1250f/volumes" Jan 30 22:12:54 crc kubenswrapper[4751]: I0130 22:12:54.127345 
4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:12:54 crc kubenswrapper[4751]: I0130 22:12:54.128446 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:13:24 crc kubenswrapper[4751]: I0130 22:13:24.126568 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:13:24 crc kubenswrapper[4751]: I0130 22:13:24.127204 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:13:54 crc kubenswrapper[4751]: I0130 22:13:54.126739 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:13:54 crc kubenswrapper[4751]: I0130 22:13:54.127370 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:13:54 crc kubenswrapper[4751]: I0130 22:13:54.127434 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 22:13:54 crc kubenswrapper[4751]: I0130 22:13:54.128402 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:13:54 crc kubenswrapper[4751]: I0130 22:13:54.128461 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" gracePeriod=600 Jan 30 22:13:54 crc kubenswrapper[4751]: E0130 22:13:54.263459 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:13:54 crc kubenswrapper[4751]: I0130 22:13:54.745997 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" exitCode=0 Jan 30 22:13:54 crc kubenswrapper[4751]: I0130 22:13:54.746043 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"} Jan 30 22:13:54 crc kubenswrapper[4751]: I0130 22:13:54.746075 4751 scope.go:117] "RemoveContainer" containerID="b7204a414860b1d9f7ebaccba0c3c85f4ccaeeed68090f146baeabd5dcaab619" Jan 30 22:13:54 crc kubenswrapper[4751]: I0130 22:13:54.747233 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:13:54 crc kubenswrapper[4751]: E0130 22:13:54.747765 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.041886 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c27dd"] Jan 30 22:14:03 crc kubenswrapper[4751]: E0130 22:14:03.044109 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2702e38-6753-43af-9a56-dd00aba1250f" containerName="extract-utilities" Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.044130 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2702e38-6753-43af-9a56-dd00aba1250f" containerName="extract-utilities" Jan 30 22:14:03 crc kubenswrapper[4751]: E0130 22:14:03.044179 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2702e38-6753-43af-9a56-dd00aba1250f" containerName="registry-server" Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.044189 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2702e38-6753-43af-9a56-dd00aba1250f" containerName="registry-server" Jan 30 22:14:03 crc kubenswrapper[4751]: E0130 22:14:03.044242 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2702e38-6753-43af-9a56-dd00aba1250f" containerName="extract-content" Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.044252 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2702e38-6753-43af-9a56-dd00aba1250f" containerName="extract-content" Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.045102 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2702e38-6753-43af-9a56-dd00aba1250f" containerName="registry-server" Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.059803 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.065803 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c27dd"] Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.236999 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-catalog-content\") pod \"redhat-marketplace-c27dd\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") " pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.237554 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkgj6\" (UniqueName: \"kubernetes.io/projected/b95606b5-59a9-4df1-8aff-012ba61fe3ed-kube-api-access-hkgj6\") pod \"redhat-marketplace-c27dd\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") " pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.237726 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-utilities\") pod \"redhat-marketplace-c27dd\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") " pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.339462 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkgj6\" (UniqueName: \"kubernetes.io/projected/b95606b5-59a9-4df1-8aff-012ba61fe3ed-kube-api-access-hkgj6\") pod \"redhat-marketplace-c27dd\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") " pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.339548 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-utilities\") pod \"redhat-marketplace-c27dd\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") " pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.339634 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-catalog-content\") pod \"redhat-marketplace-c27dd\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") " pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.340082 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-catalog-content\") pod \"redhat-marketplace-c27dd\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") " pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.340159 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-utilities\") pod \"redhat-marketplace-c27dd\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") " pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.362576 4751 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hkgj6\" (UniqueName: \"kubernetes.io/projected/b95606b5-59a9-4df1-8aff-012ba61fe3ed-kube-api-access-hkgj6\") pod \"redhat-marketplace-c27dd\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") " pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.398993 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:03 crc kubenswrapper[4751]: W0130 22:14:03.926872 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb95606b5_59a9_4df1_8aff_012ba61fe3ed.slice/crio-810add7a4a9f5890988c7d980249519e20731df697bfe1df04466ec004b1d204 WatchSource:0}: Error finding container 810add7a4a9f5890988c7d980249519e20731df697bfe1df04466ec004b1d204: Status 404 returned error can't find the container with id 810add7a4a9f5890988c7d980249519e20731df697bfe1df04466ec004b1d204 Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.934297 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c27dd"] Jan 30 22:14:04 crc kubenswrapper[4751]: I0130 22:14:04.870738 4751 generic.go:334] "Generic (PLEG): container finished" podID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerID="7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05" exitCode=0 Jan 30 22:14:04 crc kubenswrapper[4751]: I0130 22:14:04.870787 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c27dd" event={"ID":"b95606b5-59a9-4df1-8aff-012ba61fe3ed","Type":"ContainerDied","Data":"7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05"} Jan 30 22:14:04 crc kubenswrapper[4751]: I0130 22:14:04.870820 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c27dd" event={"ID":"b95606b5-59a9-4df1-8aff-012ba61fe3ed","Type":"ContainerStarted","Data":"810add7a4a9f5890988c7d980249519e20731df697bfe1df04466ec004b1d204"} Jan 30 22:14:05 crc kubenswrapper[4751]: I0130 22:14:05.976704 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:14:05 crc kubenswrapper[4751]: E0130 22:14:05.977244 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:14:06 crc kubenswrapper[4751]: I0130 22:14:06.899504 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c27dd" event={"ID":"b95606b5-59a9-4df1-8aff-012ba61fe3ed","Type":"ContainerStarted","Data":"f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a"} Jan 30 22:14:07 crc kubenswrapper[4751]: I0130 22:14:07.912015 4751 generic.go:334] "Generic (PLEG): container finished" podID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerID="f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a" exitCode=0 Jan 30 22:14:07 crc kubenswrapper[4751]: I0130 22:14:07.912070 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c27dd" 
event={"ID":"b95606b5-59a9-4df1-8aff-012ba61fe3ed","Type":"ContainerDied","Data":"f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a"} Jan 30 22:14:09 crc kubenswrapper[4751]: I0130 22:14:09.937461 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c27dd" event={"ID":"b95606b5-59a9-4df1-8aff-012ba61fe3ed","Type":"ContainerStarted","Data":"3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e"} Jan 30 22:14:09 crc kubenswrapper[4751]: I0130 22:14:09.961316 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c27dd" podStartSLOduration=3.873274584 podStartE2EDuration="7.961291089s" podCreationTimestamp="2026-01-30 22:14:02 +0000 UTC" firstStartedPulling="2026-01-30 22:14:04.873079454 +0000 UTC m=+3583.618902103" lastFinishedPulling="2026-01-30 22:14:08.961095959 +0000 UTC m=+3587.706918608" observedRunningTime="2026-01-30 22:14:09.956169872 +0000 UTC m=+3588.701992531" watchObservedRunningTime="2026-01-30 22:14:09.961291089 +0000 UTC m=+3588.707113738" Jan 30 22:14:13 crc kubenswrapper[4751]: I0130 22:14:13.399351 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:13 crc kubenswrapper[4751]: I0130 22:14:13.399903 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:13 crc kubenswrapper[4751]: I0130 22:14:13.462923 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:19 crc kubenswrapper[4751]: I0130 22:14:19.977981 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:14:19 crc kubenswrapper[4751]: E0130 22:14:19.979142 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:14:23 crc kubenswrapper[4751]: I0130 22:14:23.453727 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:23 crc kubenswrapper[4751]: I0130 22:14:23.508788 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c27dd"] Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.081288 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c27dd" podUID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerName="registry-server" containerID="cri-o://3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e" gracePeriod=2 Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.612744 4751 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.774784 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-utilities\") pod \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") "
Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.774926 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkgj6\" (UniqueName: \"kubernetes.io/projected/b95606b5-59a9-4df1-8aff-012ba61fe3ed-kube-api-access-hkgj6\") pod \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") "
Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.774970 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-catalog-content\") pod \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") "
Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.776025 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-utilities" (OuterVolumeSpecName: "utilities") pod "b95606b5-59a9-4df1-8aff-012ba61fe3ed" (UID: "b95606b5-59a9-4df1-8aff-012ba61fe3ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.783761 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b95606b5-59a9-4df1-8aff-012ba61fe3ed-kube-api-access-hkgj6" (OuterVolumeSpecName: "kube-api-access-hkgj6") pod "b95606b5-59a9-4df1-8aff-012ba61fe3ed" (UID: "b95606b5-59a9-4df1-8aff-012ba61fe3ed"). InnerVolumeSpecName "kube-api-access-hkgj6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.807577 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b95606b5-59a9-4df1-8aff-012ba61fe3ed" (UID: "b95606b5-59a9-4df1-8aff-012ba61fe3ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.877917 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.877959 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkgj6\" (UniqueName: \"kubernetes.io/projected/b95606b5-59a9-4df1-8aff-012ba61fe3ed-kube-api-access-hkgj6\") on node \"crc\" DevicePath \"\""
Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.877971 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.096230 4751 generic.go:334] "Generic (PLEG): container finished" podID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerID="3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e" exitCode=0
Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.096289 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c27dd"
Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.096357 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c27dd" event={"ID":"b95606b5-59a9-4df1-8aff-012ba61fe3ed","Type":"ContainerDied","Data":"3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e"}
Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.096702 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c27dd" event={"ID":"b95606b5-59a9-4df1-8aff-012ba61fe3ed","Type":"ContainerDied","Data":"810add7a4a9f5890988c7d980249519e20731df697bfe1df04466ec004b1d204"}
Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.096728 4751 scope.go:117] "RemoveContainer" containerID="3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e"
Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.124979 4751 scope.go:117] "RemoveContainer" containerID="f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a"
Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.140451 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c27dd"]
Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.152647 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c27dd"]
Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.164950 4751 scope.go:117] "RemoveContainer" containerID="7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05"
Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.239475 4751 scope.go:117] "RemoveContainer" containerID="3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e"
Jan 30 22:14:25 crc kubenswrapper[4751]: E0130 22:14:25.239910 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e\": container with ID starting with 3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e not found: ID does not exist" containerID="3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e"
Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.239957 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e"} err="failed to get container status \"3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e\": rpc error: code = NotFound desc = could not find container \"3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e\": container with ID starting with 3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e not found: ID does not exist"
Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.239986 4751 scope.go:117] "RemoveContainer" containerID="f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a"
Jan 30 22:14:25 crc kubenswrapper[4751]: E0130 22:14:25.240240 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a\": container with ID starting with f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a not found: ID does not exist" containerID="f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a"
Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.240274 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a"} err="failed to get container status \"f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a\": rpc error: code = NotFound desc = could not find container \"f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a\": container with ID starting with f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a not found: ID does not exist"
Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.240290 4751 scope.go:117] "RemoveContainer" containerID="7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05"
Jan 30 22:14:25 crc kubenswrapper[4751]: E0130 22:14:25.240803 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05\": container with ID starting with 7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05 not found: ID does not exist" containerID="7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05"
Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.240825 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05"} err="failed to get container status \"7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05\": rpc error: code = NotFound desc = could not find container \"7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05\": container with ID starting with 7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05 not found: ID does not exist"
Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.987148 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" path="/var/lib/kubelet/pods/b95606b5-59a9-4df1-8aff-012ba61fe3ed/volumes"
Jan 30 22:14:33 crc kubenswrapper[4751]: I0130 22:14:33.976466 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"
Jan 30 22:14:33 crc kubenswrapper[4751]: E0130 22:14:33.977434 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
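The "DeleteContainer returned error ... NotFound" sequence above is benign: by the time the kubelet retries the removal, CRI-O has already deleted the container, so the status lookup fails and cleanup proceeds anyway. A rough sketch of that idempotent-delete pattern for gRPC-style errors (removeContainer and deleteFromRuntime are illustrative stand-ins, not the kubelet's actual helpers):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// deleteFromRuntime stands in for a CRI RemoveContainer call that races
// with the runtime's own garbage collection.
func deleteFromRuntime(id string) error {
	return status.Error(codes.NotFound, "could not find container "+id)
}

// removeContainer treats NotFound as success: the goal state
// ("container gone") is already reached.
func removeContainer(id string) error {
	if err := deleteFromRuntime(id); err != nil {
		if status.Code(err) == codes.NotFound {
			return nil // already gone: idempotent success
		}
		return err
	}
	return nil
}

func main() {
	fmt.Println("removeContainer err:", removeContainer("3e41b5a8ab7c"))
}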
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:14:48 crc kubenswrapper[4751]: I0130 22:14:48.976183 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:14:48 crc kubenswrapper[4751]: E0130 22:14:48.976942 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.175098 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m"] Jan 30 22:15:00 crc kubenswrapper[4751]: E0130 22:15:00.176299 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerName="extract-utilities" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.176317 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerName="extract-utilities" Jan 30 22:15:00 crc kubenswrapper[4751]: E0130 22:15:00.176359 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerName="registry-server" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.176369 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerName="registry-server" Jan 30 22:15:00 crc kubenswrapper[4751]: E0130 22:15:00.176383 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerName="extract-content" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.176390 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerName="extract-content" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.176705 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerName="registry-server" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.177796 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.180082 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.181009 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.188608 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m"] Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.286986 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5pz5\" (UniqueName: \"kubernetes.io/projected/3f9671fd-4ee5-4071-8dd4-86a335928d79-kube-api-access-d5pz5\") pod \"collect-profiles-29496855-ncq7m\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.287264 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9671fd-4ee5-4071-8dd4-86a335928d79-config-volume\") pod \"collect-profiles-29496855-ncq7m\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.287350 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f9671fd-4ee5-4071-8dd4-86a335928d79-secret-volume\") pod \"collect-profiles-29496855-ncq7m\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.390274 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9671fd-4ee5-4071-8dd4-86a335928d79-config-volume\") pod \"collect-profiles-29496855-ncq7m\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.390736 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f9671fd-4ee5-4071-8dd4-86a335928d79-secret-volume\") pod \"collect-profiles-29496855-ncq7m\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.391003 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5pz5\" (UniqueName: \"kubernetes.io/projected/3f9671fd-4ee5-4071-8dd4-86a335928d79-kube-api-access-d5pz5\") pod \"collect-profiles-29496855-ncq7m\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.391436 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9671fd-4ee5-4071-8dd4-86a335928d79-config-volume\") pod 
\"collect-profiles-29496855-ncq7m\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.397867 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f9671fd-4ee5-4071-8dd4-86a335928d79-secret-volume\") pod \"collect-profiles-29496855-ncq7m\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.408694 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5pz5\" (UniqueName: \"kubernetes.io/projected/3f9671fd-4ee5-4071-8dd4-86a335928d79-kube-api-access-d5pz5\") pod \"collect-profiles-29496855-ncq7m\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.511992 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.992519 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m"] Jan 30 22:15:01 crc kubenswrapper[4751]: I0130 22:15:01.467534 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" event={"ID":"3f9671fd-4ee5-4071-8dd4-86a335928d79","Type":"ContainerStarted","Data":"17631e0b0228d44951b111801652ba8aead8eab296a100d22a49d18b40b57ded"} Jan 30 22:15:01 crc kubenswrapper[4751]: I0130 22:15:01.467602 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" event={"ID":"3f9671fd-4ee5-4071-8dd4-86a335928d79","Type":"ContainerStarted","Data":"de23f390426cd77b0e6c6fe987b57369c253861ad994659a19edd6c3ffecb670"} Jan 30 22:15:01 crc kubenswrapper[4751]: I0130 22:15:01.491528 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" podStartSLOduration=1.491506837 podStartE2EDuration="1.491506837s" podCreationTimestamp="2026-01-30 22:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:15:01.482892995 +0000 UTC m=+3640.228715674" watchObservedRunningTime="2026-01-30 22:15:01.491506837 +0000 UTC m=+3640.237329486" Jan 30 22:15:02 crc kubenswrapper[4751]: I0130 22:15:02.478562 4751 generic.go:334] "Generic (PLEG): container finished" podID="3f9671fd-4ee5-4071-8dd4-86a335928d79" containerID="17631e0b0228d44951b111801652ba8aead8eab296a100d22a49d18b40b57ded" exitCode=0 Jan 30 22:15:02 crc kubenswrapper[4751]: I0130 22:15:02.478669 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" event={"ID":"3f9671fd-4ee5-4071-8dd4-86a335928d79","Type":"ContainerDied","Data":"17631e0b0228d44951b111801652ba8aead8eab296a100d22a49d18b40b57ded"} Jan 30 22:15:02 crc kubenswrapper[4751]: I0130 22:15:02.976547 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:15:02 crc kubenswrapper[4751]: E0130 22:15:02.977006 4751 
Jan 30 22:15:03 crc kubenswrapper[4751]: I0130 22:15:03.917381 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m"
Jan 30 22:15:03 crc kubenswrapper[4751]: I0130 22:15:03.975904 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9671fd-4ee5-4071-8dd4-86a335928d79-config-volume\") pod \"3f9671fd-4ee5-4071-8dd4-86a335928d79\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") "
Jan 30 22:15:03 crc kubenswrapper[4751]: I0130 22:15:03.976223 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f9671fd-4ee5-4071-8dd4-86a335928d79-secret-volume\") pod \"3f9671fd-4ee5-4071-8dd4-86a335928d79\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") "
Jan 30 22:15:03 crc kubenswrapper[4751]: I0130 22:15:03.976411 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5pz5\" (UniqueName: \"kubernetes.io/projected/3f9671fd-4ee5-4071-8dd4-86a335928d79-kube-api-access-d5pz5\") pod \"3f9671fd-4ee5-4071-8dd4-86a335928d79\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") "
Jan 30 22:15:03 crc kubenswrapper[4751]: I0130 22:15:03.976778 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f9671fd-4ee5-4071-8dd4-86a335928d79-config-volume" (OuterVolumeSpecName: "config-volume") pod "3f9671fd-4ee5-4071-8dd4-86a335928d79" (UID: "3f9671fd-4ee5-4071-8dd4-86a335928d79"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:15:03 crc kubenswrapper[4751]: I0130 22:15:03.977431 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9671fd-4ee5-4071-8dd4-86a335928d79-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 22:15:03 crc kubenswrapper[4751]: I0130 22:15:03.988854 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f9671fd-4ee5-4071-8dd4-86a335928d79-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3f9671fd-4ee5-4071-8dd4-86a335928d79" (UID: "3f9671fd-4ee5-4071-8dd4-86a335928d79"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:15:03 crc kubenswrapper[4751]: I0130 22:15:03.988912 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f9671fd-4ee5-4071-8dd4-86a335928d79-kube-api-access-d5pz5" (OuterVolumeSpecName: "kube-api-access-d5pz5") pod "3f9671fd-4ee5-4071-8dd4-86a335928d79" (UID: "3f9671fd-4ee5-4071-8dd4-86a335928d79"). InnerVolumeSpecName "kube-api-access-d5pz5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:15:04 crc kubenswrapper[4751]: I0130 22:15:04.082046 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f9671fd-4ee5-4071-8dd4-86a335928d79-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 22:15:04 crc kubenswrapper[4751]: I0130 22:15:04.082707 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5pz5\" (UniqueName: \"kubernetes.io/projected/3f9671fd-4ee5-4071-8dd4-86a335928d79-kube-api-access-d5pz5\") on node \"crc\" DevicePath \"\""
Jan 30 22:15:04 crc kubenswrapper[4751]: I0130 22:15:04.498932 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" event={"ID":"3f9671fd-4ee5-4071-8dd4-86a335928d79","Type":"ContainerDied","Data":"de23f390426cd77b0e6c6fe987b57369c253861ad994659a19edd6c3ffecb670"}
Jan 30 22:15:04 crc kubenswrapper[4751]: I0130 22:15:04.498973 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de23f390426cd77b0e6c6fe987b57369c253861ad994659a19edd6c3ffecb670"
Jan 30 22:15:04 crc kubenswrapper[4751]: I0130 22:15:04.499024 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m"
Jan 30 22:15:04 crc kubenswrapper[4751]: I0130 22:15:04.569217 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"]
Jan 30 22:15:04 crc kubenswrapper[4751]: I0130 22:15:04.587208 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"]
Jan 30 22:15:06 crc kubenswrapper[4751]: I0130 22:15:06.001229 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6" path="/var/lib/kubelet/pods/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6/volumes"
Jan 30 22:15:14 crc kubenswrapper[4751]: I0130 22:15:14.976662 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"
Jan 30 22:15:14 crc kubenswrapper[4751]: E0130 22:15:14.977677 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:15:28 crc kubenswrapper[4751]: I0130 22:15:28.357087 4751 scope.go:117] "RemoveContainer" containerID="664bdfce98a1e87d41664b73e411b35da3c4e69be04f5631e859fc26af9552e4"
Jan 30 22:15:29 crc kubenswrapper[4751]: I0130 22:15:29.976425 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"
Jan 30 22:15:29 crc kubenswrapper[4751]: E0130 22:15:29.982155 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:15:44 crc kubenswrapper[4751]: I0130 22:15:44.976601 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"
Jan 30 22:15:44 crc kubenswrapper[4751]: E0130 22:15:44.977537 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:15:55 crc kubenswrapper[4751]: I0130 22:15:55.978236 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"
Jan 30 22:15:55 crc kubenswrapper[4751]: E0130 22:15:55.979188 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:16:10 crc kubenswrapper[4751]: I0130 22:16:10.977571 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"
Jan 30 22:16:10 crc kubenswrapper[4751]: E0130 22:16:10.978382 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:16:25 crc kubenswrapper[4751]: I0130 22:16:25.976748 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"
Jan 30 22:16:25 crc kubenswrapper[4751]: E0130 22:16:25.977692 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:16:39 crc kubenswrapper[4751]: I0130 22:16:39.975895 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"
Jan 30 22:16:39 crc kubenswrapper[4751]: E0130 22:16:39.976733 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:16:52 crc kubenswrapper[4751]: I0130 22:16:52.976563 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"
Jan 30 22:16:52 crc kubenswrapper[4751]: E0130 22:16:52.977809 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:17:07 crc kubenswrapper[4751]: I0130 22:17:07.976832 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"
Jan 30 22:17:07 crc kubenswrapper[4751]: E0130 22:17:07.978658 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:17:18 crc kubenswrapper[4751]: I0130 22:17:18.976154 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"
Jan 30 22:17:18 crc kubenswrapper[4751]: E0130 22:17:18.976974 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:17:29 crc kubenswrapper[4751]: E0130 22:17:29.248243 4751 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.39:48740->38.102.83.39:41127: write tcp 38.102.83.39:48740->38.102.83.39:41127: write: connection reset by peer
Jan 30 22:17:31 crc kubenswrapper[4751]: I0130 22:17:31.986409 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"
Jan 30 22:17:31 crc kubenswrapper[4751]: E0130 22:17:31.987187 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:17:44 crc kubenswrapper[4751]: I0130 22:17:44.976608 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"
Jan 30 22:17:44 crc kubenswrapper[4751]: E0130 22:17:44.977385 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:17:55 crc kubenswrapper[4751]: I0130 22:17:55.976892 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"
Jan 30 22:17:55 crc kubenswrapper[4751]: E0130 22:17:55.977837 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:18:07 crc kubenswrapper[4751]: I0130 22:18:07.976418 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"
Jan 30 22:18:07 crc kubenswrapper[4751]: E0130 22:18:07.977459 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:18:22 crc kubenswrapper[4751]: I0130 22:18:22.975468 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"
Jan 30 22:18:22 crc kubenswrapper[4751]: E0130 22:18:22.976201 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:18:34 crc kubenswrapper[4751]: I0130 22:18:34.976705 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"
Jan 30 22:18:34 crc kubenswrapper[4751]: E0130 22:18:34.977591 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:18:47 crc kubenswrapper[4751]: I0130 22:18:47.976246 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"
Jan 30 22:18:47 crc kubenswrapper[4751]: E0130 22:18:47.977116 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:18:58 crc kubenswrapper[4751]: I0130 22:18:58.977320 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"
Jan 30 22:18:59 crc kubenswrapper[4751]: I0130 22:18:59.248985 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"efd99c7f1a974f0acdc1ce10091a0b2ee7636478bf31291cff8918dfb9474170"}
pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"efd99c7f1a974f0acdc1ce10091a0b2ee7636478bf31291cff8918dfb9474170"} Jan 30 22:20:42 crc kubenswrapper[4751]: I0130 22:20:42.782146 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="d55cd7e5-6799-4e1a-9f3b-a92937aca796" containerName="galera" probeResult="failure" output="command timed out" Jan 30 22:20:42 crc kubenswrapper[4751]: I0130 22:20:42.784222 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="d55cd7e5-6799-4e1a-9f3b-a92937aca796" containerName="galera" probeResult="failure" output="command timed out" Jan 30 22:21:24 crc kubenswrapper[4751]: I0130 22:21:24.126986 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:21:24 crc kubenswrapper[4751]: I0130 22:21:24.127621 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:21:54 crc kubenswrapper[4751]: I0130 22:21:54.126994 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:21:54 crc kubenswrapper[4751]: I0130 22:21:54.127728 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:22:24 crc kubenswrapper[4751]: I0130 22:22:24.126750 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:22:24 crc kubenswrapper[4751]: I0130 22:22:24.127430 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:22:24 crc kubenswrapper[4751]: I0130 22:22:24.127513 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 22:22:24 crc kubenswrapper[4751]: I0130 22:22:24.128841 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"efd99c7f1a974f0acdc1ce10091a0b2ee7636478bf31291cff8918dfb9474170"} 
pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:22:24 crc kubenswrapper[4751]: I0130 22:22:24.128950 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://efd99c7f1a974f0acdc1ce10091a0b2ee7636478bf31291cff8918dfb9474170" gracePeriod=600 Jan 30 22:22:25 crc kubenswrapper[4751]: I0130 22:22:25.093421 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="efd99c7f1a974f0acdc1ce10091a0b2ee7636478bf31291cff8918dfb9474170" exitCode=0 Jan 30 22:22:25 crc kubenswrapper[4751]: I0130 22:22:25.093535 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"efd99c7f1a974f0acdc1ce10091a0b2ee7636478bf31291cff8918dfb9474170"} Jan 30 22:22:25 crc kubenswrapper[4751]: I0130 22:22:25.094059 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"} Jan 30 22:22:25 crc kubenswrapper[4751]: I0130 22:22:25.094089 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.034948 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9gvrx"] Jan 30 22:22:51 crc kubenswrapper[4751]: E0130 22:22:51.036139 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f9671fd-4ee5-4071-8dd4-86a335928d79" containerName="collect-profiles" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.036175 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f9671fd-4ee5-4071-8dd4-86a335928d79" containerName="collect-profiles" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.036512 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f9671fd-4ee5-4071-8dd4-86a335928d79" containerName="collect-profiles" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.038431 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.046926 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9gvrx"] Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.152356 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-utilities\") pod \"redhat-operators-9gvrx\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.152439 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67z9p\" (UniqueName: \"kubernetes.io/projected/76270476-be06-47bd-88e3-18ef902b6aba-kube-api-access-67z9p\") pod \"redhat-operators-9gvrx\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.153165 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-catalog-content\") pod \"redhat-operators-9gvrx\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.255034 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-catalog-content\") pod \"redhat-operators-9gvrx\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.255178 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-utilities\") pod \"redhat-operators-9gvrx\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.255232 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67z9p\" (UniqueName: \"kubernetes.io/projected/76270476-be06-47bd-88e3-18ef902b6aba-kube-api-access-67z9p\") pod \"redhat-operators-9gvrx\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.255638 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-catalog-content\") pod \"redhat-operators-9gvrx\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.255691 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-utilities\") pod \"redhat-operators-9gvrx\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.280360 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-67z9p\" (UniqueName: \"kubernetes.io/projected/76270476-be06-47bd-88e3-18ef902b6aba-kube-api-access-67z9p\") pod \"redhat-operators-9gvrx\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.364594 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.944286 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9gvrx"] Jan 30 22:22:52 crc kubenswrapper[4751]: I0130 22:22:52.409576 4751 generic.go:334] "Generic (PLEG): container finished" podID="76270476-be06-47bd-88e3-18ef902b6aba" containerID="3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074" exitCode=0 Jan 30 22:22:52 crc kubenswrapper[4751]: I0130 22:22:52.409732 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gvrx" event={"ID":"76270476-be06-47bd-88e3-18ef902b6aba","Type":"ContainerDied","Data":"3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074"} Jan 30 22:22:52 crc kubenswrapper[4751]: I0130 22:22:52.410142 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gvrx" event={"ID":"76270476-be06-47bd-88e3-18ef902b6aba","Type":"ContainerStarted","Data":"14f3d4c75f54bfa75e802384c2da329bc21b119019ff68ea38121fe6f87885fa"} Jan 30 22:22:52 crc kubenswrapper[4751]: I0130 22:22:52.412246 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:22:54 crc kubenswrapper[4751]: I0130 22:22:54.429770 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gvrx" event={"ID":"76270476-be06-47bd-88e3-18ef902b6aba","Type":"ContainerStarted","Data":"a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20"} Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.427163 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9wpq7"] Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.430517 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.457275 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9wpq7"] Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.581997 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-catalog-content\") pod \"certified-operators-9wpq7\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.582266 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-utilities\") pod \"certified-operators-9wpq7\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.582555 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxkmk\" (UniqueName: \"kubernetes.io/projected/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-kube-api-access-dxkmk\") pod \"certified-operators-9wpq7\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.685445 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-catalog-content\") pod \"certified-operators-9wpq7\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.685689 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-catalog-content\") pod \"certified-operators-9wpq7\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.685851 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-utilities\") pod \"certified-operators-9wpq7\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.686067 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxkmk\" (UniqueName: \"kubernetes.io/projected/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-kube-api-access-dxkmk\") pod \"certified-operators-9wpq7\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.686246 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-utilities\") pod \"certified-operators-9wpq7\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.709850 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dxkmk\" (UniqueName: \"kubernetes.io/projected/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-kube-api-access-dxkmk\") pod \"certified-operators-9wpq7\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.758771 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:57 crc kubenswrapper[4751]: I0130 22:22:57.446846 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9wpq7"] Jan 30 22:22:57 crc kubenswrapper[4751]: I0130 22:22:57.472535 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wpq7" event={"ID":"4a46f48d-5ce2-43ef-adb9-56105e6b01d3","Type":"ContainerStarted","Data":"65e1f65905987f1def6751e971ba08713ad56c133e5653d3a02921acceba9c27"} Jan 30 22:22:58 crc kubenswrapper[4751]: I0130 22:22:58.484510 4751 generic.go:334] "Generic (PLEG): container finished" podID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerID="3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236" exitCode=0 Jan 30 22:22:58 crc kubenswrapper[4751]: I0130 22:22:58.484624 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wpq7" event={"ID":"4a46f48d-5ce2-43ef-adb9-56105e6b01d3","Type":"ContainerDied","Data":"3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236"} Jan 30 22:22:59 crc kubenswrapper[4751]: I0130 22:22:59.496234 4751 generic.go:334] "Generic (PLEG): container finished" podID="76270476-be06-47bd-88e3-18ef902b6aba" containerID="a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20" exitCode=0 Jan 30 22:22:59 crc kubenswrapper[4751]: I0130 22:22:59.496320 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gvrx" event={"ID":"76270476-be06-47bd-88e3-18ef902b6aba","Type":"ContainerDied","Data":"a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20"} Jan 30 22:22:59 crc kubenswrapper[4751]: I0130 22:22:59.498601 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wpq7" event={"ID":"4a46f48d-5ce2-43ef-adb9-56105e6b01d3","Type":"ContainerStarted","Data":"bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616"} Jan 30 22:23:01 crc kubenswrapper[4751]: I0130 22:23:01.519759 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gvrx" event={"ID":"76270476-be06-47bd-88e3-18ef902b6aba","Type":"ContainerStarted","Data":"eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58"} Jan 30 22:23:02 crc kubenswrapper[4751]: I0130 22:23:02.531590 4751 generic.go:334] "Generic (PLEG): container finished" podID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerID="bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616" exitCode=0 Jan 30 22:23:02 crc kubenswrapper[4751]: I0130 22:23:02.531655 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wpq7" event={"ID":"4a46f48d-5ce2-43ef-adb9-56105e6b01d3","Type":"ContainerDied","Data":"bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616"} Jan 30 22:23:02 crc kubenswrapper[4751]: I0130 22:23:02.561620 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9gvrx" 
podStartSLOduration=3.6196913840000002 podStartE2EDuration="11.561599389s" podCreationTimestamp="2026-01-30 22:22:51 +0000 UTC" firstStartedPulling="2026-01-30 22:22:52.411745354 +0000 UTC m=+4111.157568003" lastFinishedPulling="2026-01-30 22:23:00.353653359 +0000 UTC m=+4119.099476008" observedRunningTime="2026-01-30 22:23:01.537041462 +0000 UTC m=+4120.282864101" watchObservedRunningTime="2026-01-30 22:23:02.561599389 +0000 UTC m=+4121.307422038" Jan 30 22:23:03 crc kubenswrapper[4751]: I0130 22:23:03.545829 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wpq7" event={"ID":"4a46f48d-5ce2-43ef-adb9-56105e6b01d3","Type":"ContainerStarted","Data":"dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553"} Jan 30 22:23:03 crc kubenswrapper[4751]: I0130 22:23:03.574919 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9wpq7" podStartSLOduration=2.849440555 podStartE2EDuration="7.57489655s" podCreationTimestamp="2026-01-30 22:22:56 +0000 UTC" firstStartedPulling="2026-01-30 22:22:58.487043686 +0000 UTC m=+4117.232866335" lastFinishedPulling="2026-01-30 22:23:03.212499681 +0000 UTC m=+4121.958322330" observedRunningTime="2026-01-30 22:23:03.567229671 +0000 UTC m=+4122.313052320" watchObservedRunningTime="2026-01-30 22:23:03.57489655 +0000 UTC m=+4122.320719199" Jan 30 22:23:06 crc kubenswrapper[4751]: I0130 22:23:06.759220 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:23:06 crc kubenswrapper[4751]: I0130 22:23:06.759817 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:23:07 crc kubenswrapper[4751]: I0130 22:23:07.812576 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-9wpq7" podUID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerName="registry-server" probeResult="failure" output=< Jan 30 22:23:07 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:23:07 crc kubenswrapper[4751]: > Jan 30 22:23:11 crc kubenswrapper[4751]: I0130 22:23:11.365270 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:23:11 crc kubenswrapper[4751]: I0130 22:23:11.365844 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:23:11 crc kubenswrapper[4751]: I0130 22:23:11.424588 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:23:11 crc kubenswrapper[4751]: I0130 22:23:11.687433 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:23:11 crc kubenswrapper[4751]: I0130 22:23:11.750191 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9gvrx"] Jan 30 22:23:13 crc kubenswrapper[4751]: I0130 22:23:13.657769 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9gvrx" podUID="76270476-be06-47bd-88e3-18ef902b6aba" containerName="registry-server" containerID="cri-o://eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58" gracePeriod=2 Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 
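The startup-probe output above ("timeout: failed to connect service \":50051\" within 1s") is a gRPC health check against the registry-server port that must connect within one second. The connect-timeout part can be approximated with a plain TCP dial under the same 1s budget (the real probe additionally completes a gRPC health RPC):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same 1s budget the probe output reports; localhost stands in for the pod IP.
	conn, err := net.DialTimeout("tcp", "localhost:50051", time.Second)
	if err != nil {
		fmt.Println("startup probe would fail:", err)
		return
	}
	conn.Close()
	fmt.Println("startup probe would pass")
}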
22:23:14.206168 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.342720 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67z9p\" (UniqueName: \"kubernetes.io/projected/76270476-be06-47bd-88e3-18ef902b6aba-kube-api-access-67z9p\") pod \"76270476-be06-47bd-88e3-18ef902b6aba\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.342920 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-utilities\") pod \"76270476-be06-47bd-88e3-18ef902b6aba\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.342968 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-catalog-content\") pod \"76270476-be06-47bd-88e3-18ef902b6aba\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.343760 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-utilities" (OuterVolumeSpecName: "utilities") pod "76270476-be06-47bd-88e3-18ef902b6aba" (UID: "76270476-be06-47bd-88e3-18ef902b6aba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.350084 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76270476-be06-47bd-88e3-18ef902b6aba-kube-api-access-67z9p" (OuterVolumeSpecName: "kube-api-access-67z9p") pod "76270476-be06-47bd-88e3-18ef902b6aba" (UID: "76270476-be06-47bd-88e3-18ef902b6aba"). InnerVolumeSpecName "kube-api-access-67z9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.445451 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67z9p\" (UniqueName: \"kubernetes.io/projected/76270476-be06-47bd-88e3-18ef902b6aba-kube-api-access-67z9p\") on node \"crc\" DevicePath \"\"" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.445494 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.467435 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76270476-be06-47bd-88e3-18ef902b6aba" (UID: "76270476-be06-47bd-88e3-18ef902b6aba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.547214 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.677874 4751 generic.go:334] "Generic (PLEG): container finished" podID="76270476-be06-47bd-88e3-18ef902b6aba" containerID="eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58" exitCode=0 Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.677918 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gvrx" event={"ID":"76270476-be06-47bd-88e3-18ef902b6aba","Type":"ContainerDied","Data":"eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58"} Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.677946 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gvrx" event={"ID":"76270476-be06-47bd-88e3-18ef902b6aba","Type":"ContainerDied","Data":"14f3d4c75f54bfa75e802384c2da329bc21b119019ff68ea38121fe6f87885fa"} Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.677962 4751 scope.go:117] "RemoveContainer" containerID="eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.677975 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.716266 4751 scope.go:117] "RemoveContainer" containerID="a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.721612 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9gvrx"] Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.734955 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9gvrx"] Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.744581 4751 scope.go:117] "RemoveContainer" containerID="3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.801993 4751 scope.go:117] "RemoveContainer" containerID="eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58" Jan 30 22:23:14 crc kubenswrapper[4751]: E0130 22:23:14.803468 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58\": container with ID starting with eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58 not found: ID does not exist" containerID="eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.803511 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58"} err="failed to get container status \"eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58\": rpc error: code = NotFound desc = could not find container \"eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58\": container with ID starting with eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58 not found: ID does not exist" Jan 30 22:23:14 crc 
kubenswrapper[4751]: I0130 22:23:14.803537 4751 scope.go:117] "RemoveContainer" containerID="a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20" Jan 30 22:23:14 crc kubenswrapper[4751]: E0130 22:23:14.804871 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20\": container with ID starting with a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20 not found: ID does not exist" containerID="a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.804914 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20"} err="failed to get container status \"a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20\": rpc error: code = NotFound desc = could not find container \"a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20\": container with ID starting with a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20 not found: ID does not exist" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.804946 4751 scope.go:117] "RemoveContainer" containerID="3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074" Jan 30 22:23:14 crc kubenswrapper[4751]: E0130 22:23:14.805840 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074\": container with ID starting with 3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074 not found: ID does not exist" containerID="3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.806649 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074"} err="failed to get container status \"3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074\": rpc error: code = NotFound desc = could not find container \"3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074\": container with ID starting with 3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074 not found: ID does not exist" Jan 30 22:23:15 crc kubenswrapper[4751]: I0130 22:23:15.992878 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76270476-be06-47bd-88e3-18ef902b6aba" path="/var/lib/kubelet/pods/76270476-be06-47bd-88e3-18ef902b6aba/volumes" Jan 30 22:23:17 crc kubenswrapper[4751]: I0130 22:23:17.028028 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:23:17 crc kubenswrapper[4751]: I0130 22:23:17.090907 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:23:18 crc kubenswrapper[4751]: I0130 22:23:18.065318 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9wpq7"] Jan 30 22:23:18 crc kubenswrapper[4751]: I0130 22:23:18.717008 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9wpq7" podUID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerName="registry-server" 
containerID="cri-o://dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553" gracePeriod=2 Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.411993 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.465795 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-catalog-content\") pod \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.465934 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-utilities\") pod \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.466096 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxkmk\" (UniqueName: \"kubernetes.io/projected/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-kube-api-access-dxkmk\") pod \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.466856 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-utilities" (OuterVolumeSpecName: "utilities") pod "4a46f48d-5ce2-43ef-adb9-56105e6b01d3" (UID: "4a46f48d-5ce2-43ef-adb9-56105e6b01d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.472959 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-kube-api-access-dxkmk" (OuterVolumeSpecName: "kube-api-access-dxkmk") pod "4a46f48d-5ce2-43ef-adb9-56105e6b01d3" (UID: "4a46f48d-5ce2-43ef-adb9-56105e6b01d3"). InnerVolumeSpecName "kube-api-access-dxkmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.515290 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a46f48d-5ce2-43ef-adb9-56105e6b01d3" (UID: "4a46f48d-5ce2-43ef-adb9-56105e6b01d3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.569227 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.569266 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.569280 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxkmk\" (UniqueName: \"kubernetes.io/projected/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-kube-api-access-dxkmk\") on node \"crc\" DevicePath \"\"" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.729920 4751 generic.go:334] "Generic (PLEG): container finished" podID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerID="dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553" exitCode=0 Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.729991 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wpq7" event={"ID":"4a46f48d-5ce2-43ef-adb9-56105e6b01d3","Type":"ContainerDied","Data":"dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553"} Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.730020 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.730067 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wpq7" event={"ID":"4a46f48d-5ce2-43ef-adb9-56105e6b01d3","Type":"ContainerDied","Data":"65e1f65905987f1def6751e971ba08713ad56c133e5653d3a02921acceba9c27"} Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.730093 4751 scope.go:117] "RemoveContainer" containerID="dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.763887 4751 scope.go:117] "RemoveContainer" containerID="bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.792486 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9wpq7"] Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.797873 4751 scope.go:117] "RemoveContainer" containerID="3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.807403 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9wpq7"] Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.858623 4751 scope.go:117] "RemoveContainer" containerID="dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553" Jan 30 22:23:19 crc kubenswrapper[4751]: E0130 22:23:19.859137 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553\": container with ID starting with dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553 not found: ID does not exist" containerID="dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.859170 
4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553"} err="failed to get container status \"dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553\": rpc error: code = NotFound desc = could not find container \"dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553\": container with ID starting with dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553 not found: ID does not exist" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.859194 4751 scope.go:117] "RemoveContainer" containerID="bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616" Jan 30 22:23:19 crc kubenswrapper[4751]: E0130 22:23:19.859496 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616\": container with ID starting with bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616 not found: ID does not exist" containerID="bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.859541 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616"} err="failed to get container status \"bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616\": rpc error: code = NotFound desc = could not find container \"bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616\": container with ID starting with bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616 not found: ID does not exist" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.859569 4751 scope.go:117] "RemoveContainer" containerID="3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236" Jan 30 22:23:19 crc kubenswrapper[4751]: E0130 22:23:19.859891 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236\": container with ID starting with 3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236 not found: ID does not exist" containerID="3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.859925 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236"} err="failed to get container status \"3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236\": rpc error: code = NotFound desc = could not find container \"3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236\": container with ID starting with 3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236 not found: ID does not exist" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.992429 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" path="/var/lib/kubelet/pods/4a46f48d-5ce2-43ef-adb9-56105e6b01d3/volumes" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.208249 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l6xlc"] Jan 30 22:23:42 crc kubenswrapper[4751]: E0130 22:23:42.209348 4751 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerName="registry-server" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.209362 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerName="registry-server" Jan 30 22:23:42 crc kubenswrapper[4751]: E0130 22:23:42.209382 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76270476-be06-47bd-88e3-18ef902b6aba" containerName="extract-content" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.209388 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="76270476-be06-47bd-88e3-18ef902b6aba" containerName="extract-content" Jan 30 22:23:42 crc kubenswrapper[4751]: E0130 22:23:42.209405 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerName="extract-utilities" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.209411 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerName="extract-utilities" Jan 30 22:23:42 crc kubenswrapper[4751]: E0130 22:23:42.209470 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerName="extract-content" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.209478 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerName="extract-content" Jan 30 22:23:42 crc kubenswrapper[4751]: E0130 22:23:42.209492 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76270476-be06-47bd-88e3-18ef902b6aba" containerName="registry-server" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.209499 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="76270476-be06-47bd-88e3-18ef902b6aba" containerName="registry-server" Jan 30 22:23:42 crc kubenswrapper[4751]: E0130 22:23:42.209512 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76270476-be06-47bd-88e3-18ef902b6aba" containerName="extract-utilities" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.209519 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="76270476-be06-47bd-88e3-18ef902b6aba" containerName="extract-utilities" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.209795 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="76270476-be06-47bd-88e3-18ef902b6aba" containerName="registry-server" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.209826 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerName="registry-server" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.211797 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.240916 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l6xlc"] Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.372812 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh94h\" (UniqueName: \"kubernetes.io/projected/4703e2e6-6343-4584-825f-4c35818f3cbd-kube-api-access-vh94h\") pod \"community-operators-l6xlc\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.373162 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-catalog-content\") pod \"community-operators-l6xlc\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.373236 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-utilities\") pod \"community-operators-l6xlc\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.475753 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-catalog-content\") pod \"community-operators-l6xlc\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.475803 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-utilities\") pod \"community-operators-l6xlc\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.475958 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh94h\" (UniqueName: \"kubernetes.io/projected/4703e2e6-6343-4584-825f-4c35818f3cbd-kube-api-access-vh94h\") pod \"community-operators-l6xlc\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.476178 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-catalog-content\") pod \"community-operators-l6xlc\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.476246 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-utilities\") pod \"community-operators-l6xlc\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.508285 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vh94h\" (UniqueName: \"kubernetes.io/projected/4703e2e6-6343-4584-825f-4c35818f3cbd-kube-api-access-vh94h\") pod \"community-operators-l6xlc\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.542907 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:43 crc kubenswrapper[4751]: I0130 22:23:43.104823 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l6xlc"] Jan 30 22:23:43 crc kubenswrapper[4751]: I0130 22:23:43.992176 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6xlc" event={"ID":"4703e2e6-6343-4584-825f-4c35818f3cbd","Type":"ContainerStarted","Data":"1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944"} Jan 30 22:23:43 crc kubenswrapper[4751]: I0130 22:23:43.992786 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6xlc" event={"ID":"4703e2e6-6343-4584-825f-4c35818f3cbd","Type":"ContainerStarted","Data":"aa511afe192cabc0b934a576e66e50e45b1455e28592efbbc96e86b29fa41338"} Jan 30 22:23:45 crc kubenswrapper[4751]: I0130 22:23:45.017051 4751 generic.go:334] "Generic (PLEG): container finished" podID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerID="1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944" exitCode=0 Jan 30 22:23:45 crc kubenswrapper[4751]: I0130 22:23:45.017650 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6xlc" event={"ID":"4703e2e6-6343-4584-825f-4c35818f3cbd","Type":"ContainerDied","Data":"1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944"} Jan 30 22:23:47 crc kubenswrapper[4751]: I0130 22:23:47.045891 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6xlc" event={"ID":"4703e2e6-6343-4584-825f-4c35818f3cbd","Type":"ContainerStarted","Data":"0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25"} Jan 30 22:23:48 crc kubenswrapper[4751]: I0130 22:23:48.057914 4751 generic.go:334] "Generic (PLEG): container finished" podID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerID="0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25" exitCode=0 Jan 30 22:23:48 crc kubenswrapper[4751]: I0130 22:23:48.057981 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6xlc" event={"ID":"4703e2e6-6343-4584-825f-4c35818f3cbd","Type":"ContainerDied","Data":"0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25"} Jan 30 22:23:49 crc kubenswrapper[4751]: I0130 22:23:49.072572 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6xlc" event={"ID":"4703e2e6-6343-4584-825f-4c35818f3cbd","Type":"ContainerStarted","Data":"7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440"} Jan 30 22:23:49 crc kubenswrapper[4751]: I0130 22:23:49.090723 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l6xlc" podStartSLOduration=3.60181165 podStartE2EDuration="7.090707309s" podCreationTimestamp="2026-01-30 22:23:42 +0000 UTC" firstStartedPulling="2026-01-30 22:23:45.022486558 +0000 UTC m=+4163.768309197" lastFinishedPulling="2026-01-30 
22:23:48.511382217 +0000 UTC m=+4167.257204856" observedRunningTime="2026-01-30 22:23:49.088477128 +0000 UTC m=+4167.834299777" watchObservedRunningTime="2026-01-30 22:23:49.090707309 +0000 UTC m=+4167.836529958" Jan 30 22:23:52 crc kubenswrapper[4751]: I0130 22:23:52.543998 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:52 crc kubenswrapper[4751]: I0130 22:23:52.544651 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:52 crc kubenswrapper[4751]: I0130 22:23:52.592033 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:53 crc kubenswrapper[4751]: I0130 22:23:53.473750 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:53 crc kubenswrapper[4751]: I0130 22:23:53.558677 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l6xlc"] Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.138966 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l6xlc" podUID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerName="registry-server" containerID="cri-o://7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440" gracePeriod=2 Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.685031 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.842889 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-utilities\") pod \"4703e2e6-6343-4584-825f-4c35818f3cbd\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.843095 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh94h\" (UniqueName: \"kubernetes.io/projected/4703e2e6-6343-4584-825f-4c35818f3cbd-kube-api-access-vh94h\") pod \"4703e2e6-6343-4584-825f-4c35818f3cbd\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.843158 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-catalog-content\") pod \"4703e2e6-6343-4584-825f-4c35818f3cbd\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.844151 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-utilities" (OuterVolumeSpecName: "utilities") pod "4703e2e6-6343-4584-825f-4c35818f3cbd" (UID: "4703e2e6-6343-4584-825f-4c35818f3cbd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.856923 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4703e2e6-6343-4584-825f-4c35818f3cbd-kube-api-access-vh94h" (OuterVolumeSpecName: "kube-api-access-vh94h") pod "4703e2e6-6343-4584-825f-4c35818f3cbd" (UID: "4703e2e6-6343-4584-825f-4c35818f3cbd"). InnerVolumeSpecName "kube-api-access-vh94h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.905398 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4703e2e6-6343-4584-825f-4c35818f3cbd" (UID: "4703e2e6-6343-4584-825f-4c35818f3cbd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.946416 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.946475 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh94h\" (UniqueName: \"kubernetes.io/projected/4703e2e6-6343-4584-825f-4c35818f3cbd-kube-api-access-vh94h\") on node \"crc\" DevicePath \"\"" Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.946526 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.154387 4751 generic.go:334] "Generic (PLEG): container finished" podID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerID="7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440" exitCode=0 Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.154438 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6xlc" event={"ID":"4703e2e6-6343-4584-825f-4c35818f3cbd","Type":"ContainerDied","Data":"7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440"} Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.154472 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6xlc" event={"ID":"4703e2e6-6343-4584-825f-4c35818f3cbd","Type":"ContainerDied","Data":"aa511afe192cabc0b934a576e66e50e45b1455e28592efbbc96e86b29fa41338"} Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.154494 4751 scope.go:117] "RemoveContainer" containerID="7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440" Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.154653 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.188340 4751 scope.go:117] "RemoveContainer" containerID="0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25" Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.191369 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l6xlc"] Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.203071 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l6xlc"] Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.211860 4751 scope.go:117] "RemoveContainer" containerID="1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944" Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.259858 4751 scope.go:117] "RemoveContainer" containerID="7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440" Jan 30 22:23:56 crc kubenswrapper[4751]: E0130 22:23:56.260370 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440\": container with ID starting with 7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440 not found: ID does not exist" containerID="7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440" Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.260407 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440"} err="failed to get container status \"7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440\": rpc error: code = NotFound desc = could not find container \"7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440\": container with ID starting with 7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440 not found: ID does not exist" Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.260434 4751 scope.go:117] "RemoveContainer" containerID="0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25" Jan 30 22:23:56 crc kubenswrapper[4751]: E0130 22:23:56.260758 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25\": container with ID starting with 0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25 not found: ID does not exist" containerID="0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25" Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.260835 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25"} err="failed to get container status \"0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25\": rpc error: code = NotFound desc = could not find container \"0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25\": container with ID starting with 0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25 not found: ID does not exist" Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.260885 4751 scope.go:117] "RemoveContainer" containerID="1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944" Jan 30 22:23:56 crc kubenswrapper[4751]: E0130 22:23:56.261689 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944\": container with ID starting with 1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944 not found: ID does not exist" containerID="1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944" Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.261713 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944"} err="failed to get container status \"1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944\": rpc error: code = NotFound desc = could not find container \"1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944\": container with ID starting with 1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944 not found: ID does not exist" Jan 30 22:23:57 crc kubenswrapper[4751]: I0130 22:23:57.987630 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4703e2e6-6343-4584-825f-4c35818f3cbd" path="/var/lib/kubelet/pods/4703e2e6-6343-4584-825f-4c35818f3cbd/volumes" Jan 30 22:24:24 crc kubenswrapper[4751]: I0130 22:24:24.126989 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:24:24 crc kubenswrapper[4751]: I0130 22:24:24.127623 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:24:35 crc kubenswrapper[4751]: I0130 22:24:35.802824 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n927t"] Jan 30 22:24:35 crc kubenswrapper[4751]: E0130 22:24:35.805039 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerName="registry-server" Jan 30 22:24:35 crc kubenswrapper[4751]: I0130 22:24:35.805064 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerName="registry-server" Jan 30 22:24:35 crc kubenswrapper[4751]: E0130 22:24:35.805093 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerName="extract-content" Jan 30 22:24:35 crc kubenswrapper[4751]: I0130 22:24:35.805103 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerName="extract-content" Jan 30 22:24:35 crc kubenswrapper[4751]: E0130 22:24:35.805154 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerName="extract-utilities" Jan 30 22:24:35 crc kubenswrapper[4751]: I0130 22:24:35.805165 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerName="extract-utilities" Jan 30 22:24:35 crc kubenswrapper[4751]: I0130 22:24:35.805471 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerName="registry-server" Jan 30 22:24:35 crc kubenswrapper[4751]: I0130 
22:24:35.807552 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:35 crc kubenswrapper[4751]: I0130 22:24:35.824209 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n927t"] Jan 30 22:24:35 crc kubenswrapper[4751]: I0130 22:24:35.955469 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-catalog-content\") pod \"redhat-marketplace-n927t\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") " pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:35 crc kubenswrapper[4751]: I0130 22:24:35.956085 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-utilities\") pod \"redhat-marketplace-n927t\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") " pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:35 crc kubenswrapper[4751]: I0130 22:24:35.956243 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4xp8\" (UniqueName: \"kubernetes.io/projected/56dff38b-859f-48c6-8b01-42dfaf948555-kube-api-access-v4xp8\") pod \"redhat-marketplace-n927t\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") " pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:36 crc kubenswrapper[4751]: I0130 22:24:36.059176 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-catalog-content\") pod \"redhat-marketplace-n927t\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") " pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:36 crc kubenswrapper[4751]: I0130 22:24:36.059362 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-utilities\") pod \"redhat-marketplace-n927t\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") " pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:36 crc kubenswrapper[4751]: I0130 22:24:36.059457 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4xp8\" (UniqueName: \"kubernetes.io/projected/56dff38b-859f-48c6-8b01-42dfaf948555-kube-api-access-v4xp8\") pod \"redhat-marketplace-n927t\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") " pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:36 crc kubenswrapper[4751]: I0130 22:24:36.060202 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-catalog-content\") pod \"redhat-marketplace-n927t\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") " pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:36 crc kubenswrapper[4751]: I0130 22:24:36.060361 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-utilities\") pod \"redhat-marketplace-n927t\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") " pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:36 crc kubenswrapper[4751]: I0130 
22:24:36.088475 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4xp8\" (UniqueName: \"kubernetes.io/projected/56dff38b-859f-48c6-8b01-42dfaf948555-kube-api-access-v4xp8\") pod \"redhat-marketplace-n927t\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") " pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:36 crc kubenswrapper[4751]: I0130 22:24:36.132168 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:36 crc kubenswrapper[4751]: I0130 22:24:36.694177 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n927t"] Jan 30 22:24:37 crc kubenswrapper[4751]: I0130 22:24:37.602987 4751 generic.go:334] "Generic (PLEG): container finished" podID="56dff38b-859f-48c6-8b01-42dfaf948555" containerID="4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573" exitCode=0 Jan 30 22:24:37 crc kubenswrapper[4751]: I0130 22:24:37.603106 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n927t" event={"ID":"56dff38b-859f-48c6-8b01-42dfaf948555","Type":"ContainerDied","Data":"4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573"} Jan 30 22:24:37 crc kubenswrapper[4751]: I0130 22:24:37.603352 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n927t" event={"ID":"56dff38b-859f-48c6-8b01-42dfaf948555","Type":"ContainerStarted","Data":"9d33d52f32a6940b5421d9ec5d883dcef165816d855dde7fe6395ca3cff7a153"} Jan 30 22:24:39 crc kubenswrapper[4751]: I0130 22:24:39.630097 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n927t" event={"ID":"56dff38b-859f-48c6-8b01-42dfaf948555","Type":"ContainerStarted","Data":"57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1"} Jan 30 22:24:40 crc kubenswrapper[4751]: I0130 22:24:40.640217 4751 generic.go:334] "Generic (PLEG): container finished" podID="56dff38b-859f-48c6-8b01-42dfaf948555" containerID="57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1" exitCode=0 Jan 30 22:24:40 crc kubenswrapper[4751]: I0130 22:24:40.640348 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n927t" event={"ID":"56dff38b-859f-48c6-8b01-42dfaf948555","Type":"ContainerDied","Data":"57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1"} Jan 30 22:24:41 crc kubenswrapper[4751]: I0130 22:24:41.652747 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n927t" event={"ID":"56dff38b-859f-48c6-8b01-42dfaf948555","Type":"ContainerStarted","Data":"b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233"} Jan 30 22:24:41 crc kubenswrapper[4751]: I0130 22:24:41.682374 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n927t" podStartSLOduration=3.194051694 podStartE2EDuration="6.682353627s" podCreationTimestamp="2026-01-30 22:24:35 +0000 UTC" firstStartedPulling="2026-01-30 22:24:37.604999808 +0000 UTC m=+4216.350822457" lastFinishedPulling="2026-01-30 22:24:41.093301741 +0000 UTC m=+4219.839124390" observedRunningTime="2026-01-30 22:24:41.671162123 +0000 UTC m=+4220.416984772" watchObservedRunningTime="2026-01-30 22:24:41.682353627 +0000 UTC m=+4220.428176296" Jan 30 22:24:46 crc kubenswrapper[4751]: I0130 22:24:46.132529 4751 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n927t"
Jan 30 22:24:46 crc kubenswrapper[4751]: I0130 22:24:46.133234 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n927t"
Jan 30 22:24:46 crc kubenswrapper[4751]: I0130 22:24:46.202716 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n927t"
Jan 30 22:24:47 crc kubenswrapper[4751]: I0130 22:24:47.624996 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n927t"
Jan 30 22:24:47 crc kubenswrapper[4751]: I0130 22:24:47.677548 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n927t"]
Jan 30 22:24:48 crc kubenswrapper[4751]: I0130 22:24:48.720615 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n927t" podUID="56dff38b-859f-48c6-8b01-42dfaf948555" containerName="registry-server" containerID="cri-o://b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233" gracePeriod=2
Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.606828 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n927t"
Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.696567 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-catalog-content\") pod \"56dff38b-859f-48c6-8b01-42dfaf948555\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") "
Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.696910 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-utilities\") pod \"56dff38b-859f-48c6-8b01-42dfaf948555\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") "
Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.696987 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4xp8\" (UniqueName: \"kubernetes.io/projected/56dff38b-859f-48c6-8b01-42dfaf948555-kube-api-access-v4xp8\") pod \"56dff38b-859f-48c6-8b01-42dfaf948555\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") "
Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.697879 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-utilities" (OuterVolumeSpecName: "utilities") pod "56dff38b-859f-48c6-8b01-42dfaf948555" (UID: "56dff38b-859f-48c6-8b01-42dfaf948555"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.703398 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56dff38b-859f-48c6-8b01-42dfaf948555-kube-api-access-v4xp8" (OuterVolumeSpecName: "kube-api-access-v4xp8") pod "56dff38b-859f-48c6-8b01-42dfaf948555" (UID: "56dff38b-859f-48c6-8b01-42dfaf948555"). InnerVolumeSpecName "kube-api-access-v4xp8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.736159 4751 generic.go:334] "Generic (PLEG): container finished" podID="56dff38b-859f-48c6-8b01-42dfaf948555" containerID="b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233" exitCode=0
Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.736217 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n927t" event={"ID":"56dff38b-859f-48c6-8b01-42dfaf948555","Type":"ContainerDied","Data":"b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233"}
Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.736252 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n927t" event={"ID":"56dff38b-859f-48c6-8b01-42dfaf948555","Type":"ContainerDied","Data":"9d33d52f32a6940b5421d9ec5d883dcef165816d855dde7fe6395ca3cff7a153"}
Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.736272 4751 scope.go:117] "RemoveContainer" containerID="b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233"
Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.736291 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n927t"
Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.787674 4751 scope.go:117] "RemoveContainer" containerID="57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1"
Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.800344 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.800629 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4xp8\" (UniqueName: \"kubernetes.io/projected/56dff38b-859f-48c6-8b01-42dfaf948555-kube-api-access-v4xp8\") on node \"crc\" DevicePath \"\""
Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.827497 4751 scope.go:117] "RemoveContainer" containerID="4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573"
Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.890362 4751 scope.go:117] "RemoveContainer" containerID="b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233"
Jan 30 22:24:49 crc kubenswrapper[4751]: E0130 22:24:49.890883 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233\": container with ID starting with b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233 not found: ID does not exist" containerID="b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233"
Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.890932 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233"} err="failed to get container status \"b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233\": rpc error: code = NotFound desc = could not find container \"b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233\": container with ID starting with b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233 not found: ID does not exist"
Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.890960 4751 scope.go:117] "RemoveContainer" containerID="57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1"
Jan 30 22:24:49 crc kubenswrapper[4751]: E0130 22:24:49.891289 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1\": container with ID starting with 57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1 not found: ID does not exist" containerID="57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1"
Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.891348 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1"} err="failed to get container status \"57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1\": rpc error: code = NotFound desc = could not find container \"57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1\": container with ID starting with 57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1 not found: ID does not exist"
Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.891379 4751 scope.go:117] "RemoveContainer" containerID="4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573"
Jan 30 22:24:49 crc kubenswrapper[4751]: E0130 22:24:49.891714 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573\": container with ID starting with 4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573 not found: ID does not exist" containerID="4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573"
Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.891744 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573"} err="failed to get container status \"4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573\": rpc error: code = NotFound desc = could not find container \"4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573\": container with ID starting with 4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573 not found: ID does not exist"
Jan 30 22:24:50 crc kubenswrapper[4751]: I0130 22:24:50.075614 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56dff38b-859f-48c6-8b01-42dfaf948555" (UID: "56dff38b-859f-48c6-8b01-42dfaf948555"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:24:50 crc kubenswrapper[4751]: I0130 22:24:50.107690 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 22:24:50 crc kubenswrapper[4751]: I0130 22:24:50.373703 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n927t"]
Jan 30 22:24:50 crc kubenswrapper[4751]: I0130 22:24:50.385773 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n927t"]
Jan 30 22:24:52 crc kubenswrapper[4751]: I0130 22:24:52.008986 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56dff38b-859f-48c6-8b01-42dfaf948555" path="/var/lib/kubelet/pods/56dff38b-859f-48c6-8b01-42dfaf948555/volumes"
Jan 30 22:24:54 crc kubenswrapper[4751]: I0130 22:24:54.127213 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:24:54 crc kubenswrapper[4751]: I0130 22:24:54.127812 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:25:24 crc kubenswrapper[4751]: I0130 22:25:24.126883 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:25:24 crc kubenswrapper[4751]: I0130 22:25:24.127465 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:25:24 crc kubenswrapper[4751]: I0130 22:25:24.127530 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp"
Jan 30 22:25:24 crc kubenswrapper[4751]: I0130 22:25:24.128397 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 22:25:24 crc kubenswrapper[4751]: I0130 22:25:24.128452 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" gracePeriod=600
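Each liveness failure above is just an HTTP GET that could not connect; the prober treats connection refused the same as a failing response, and once enough consecutive probes fail (three by default) the sync loop marks the probe unhealthy and kills the container with its grace period (600s here), leaving the restart to the pod's restart policy. A rough stand-in for the check itself, with the endpoint taken from the log and the timeout an assumed value, not one read from the pod spec:

```python
#!/usr/bin/env python3
# Rough stand-in for the kubelet's HTTP liveness check against the endpoint
# named in the log above. Timeout and threshold are illustrative assumptions;
# the real values live in the container's livenessProbe spec.
import http.client

def probe_once(host="127.0.0.1", port=8798, path="/health", timeout=1.0):
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", path)
        ok = 200 <= conn.getresponse().status < 400  # 2xx/3xx counts as healthy
        conn.close()
        return ok
    except OSError as exc:  # "connect: connection refused" ends up here
        print(f"probe failure: {exc}")
        return False

if __name__ == "__main__":
    failures = sum(1 for _ in range(3) if not probe_once())
    if failures >= 3:  # assumed default failureThreshold
        print("unhealthy: container would be killed and restarted")
```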
Jan 30 22:25:24 crc kubenswrapper[4751]: E0130 22:25:24.255407 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:25:25 crc kubenswrapper[4751]: I0130 22:25:25.146741 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" exitCode=0
Jan 30 22:25:25 crc kubenswrapper[4751]: I0130 22:25:25.146812 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"}
Jan 30 22:25:25 crc kubenswrapper[4751]: I0130 22:25:25.147119 4751 scope.go:117] "RemoveContainer" containerID="efd99c7f1a974f0acdc1ce10091a0b2ee7636478bf31291cff8918dfb9474170"
Jan 30 22:25:25 crc kubenswrapper[4751]: I0130 22:25:25.147962 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:25:25 crc kubenswrapper[4751]: E0130 22:25:25.148318 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:25:38 crc kubenswrapper[4751]: I0130 22:25:38.976092 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:25:38 crc kubenswrapper[4751]: E0130 22:25:38.977071 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:25:51 crc kubenswrapper[4751]: I0130 22:25:51.983934 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:25:51 crc kubenswrapper[4751]: E0130 22:25:51.984740 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:26:02 crc kubenswrapper[4751]: I0130 22:26:02.977532 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:26:02 crc kubenswrapper[4751]: E0130 22:26:02.978491 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:26:15 crc kubenswrapper[4751]: I0130 22:26:15.976419 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:26:15 crc kubenswrapper[4751]: E0130 22:26:15.977160 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:26:28 crc kubenswrapper[4751]: I0130 22:26:28.976468 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:26:28 crc kubenswrapper[4751]: E0130 22:26:28.977342 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:26:40 crc kubenswrapper[4751]: I0130 22:26:40.976631 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:26:40 crc kubenswrapper[4751]: E0130 22:26:40.977630 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:26:51 crc kubenswrapper[4751]: I0130 22:26:51.976523 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:26:51 crc kubenswrapper[4751]: E0130 22:26:51.977326 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:27:03 crc kubenswrapper[4751]: I0130 22:27:03.976747 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:27:03 crc kubenswrapper[4751]: E0130 22:27:03.977628 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:27:17 crc kubenswrapper[4751]: I0130 22:27:17.977809 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:27:17 crc kubenswrapper[4751]: E0130 22:27:17.978614 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:27:22 crc kubenswrapper[4751]: E0130 22:27:22.291056 4751 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.39:51282->38.102.83.39:41127: write tcp 38.102.83.39:51282->38.102.83.39:41127: write: broken pipe
Jan 30 22:27:32 crc kubenswrapper[4751]: I0130 22:27:32.976191 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:27:32 crc kubenswrapper[4751]: E0130 22:27:32.977126 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:27:45 crc kubenswrapper[4751]: I0130 22:27:45.977136 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:27:45 crc kubenswrapper[4751]: E0130 22:27:45.977843 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:27:56 crc kubenswrapper[4751]: I0130 22:27:56.976376 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:27:56 crc kubenswrapper[4751]: E0130 22:27:56.977306 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:28:07 crc kubenswrapper[4751]: I0130 22:28:07.977363 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:28:07 crc kubenswrapper[4751]: E0130 22:28:07.978311 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:28:18 crc kubenswrapper[4751]: I0130 22:28:18.975962 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:28:18 crc kubenswrapper[4751]: E0130 22:28:18.976665 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:28:29 crc kubenswrapper[4751]: I0130 22:28:29.978317 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:28:29 crc kubenswrapper[4751]: E0130 22:28:29.979644 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:28:40 crc kubenswrapper[4751]: I0130 22:28:40.977208 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:28:40 crc kubenswrapper[4751]: E0130 22:28:40.978363 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:28:52 crc kubenswrapper[4751]: I0130 22:28:52.975943 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:28:52 crc kubenswrapper[4751]: E0130 22:28:52.976995 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:29:04 crc kubenswrapper[4751]: I0130 22:29:04.975761 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:29:04 crc kubenswrapper[4751]: E0130 22:29:04.976686 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:29:15 crc kubenswrapper[4751]: I0130 22:29:15.976240 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:29:15 crc kubenswrapper[4751]: E0130 22:29:15.977013 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:29:26 crc kubenswrapper[4751]: I0130 22:29:26.976425 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:29:26 crc kubenswrapper[4751]: E0130 22:29:26.977268 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:29:38 crc kubenswrapper[4751]: I0130 22:29:38.976417 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:29:38 crc kubenswrapper[4751]: E0130 22:29:38.977353 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:29:52 crc kubenswrapper[4751]: I0130 22:29:52.977139 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:29:52 crc kubenswrapper[4751]: E0130 22:29:52.977998 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.159025 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"]
Jan 30 22:30:00 crc kubenswrapper[4751]: E0130 22:30:00.161165 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56dff38b-859f-48c6-8b01-42dfaf948555" containerName="extract-content"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.161180 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="56dff38b-859f-48c6-8b01-42dfaf948555" containerName="extract-content"
Jan 30 22:30:00 crc kubenswrapper[4751]: E0130 22:30:00.161205 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56dff38b-859f-48c6-8b01-42dfaf948555" containerName="registry-server"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.161211 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="56dff38b-859f-48c6-8b01-42dfaf948555" containerName="registry-server"
Jan 30 22:30:00 crc kubenswrapper[4751]: E0130 22:30:00.161243 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56dff38b-859f-48c6-8b01-42dfaf948555" containerName="extract-utilities"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.161249 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="56dff38b-859f-48c6-8b01-42dfaf948555" containerName="extract-utilities"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.161520 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="56dff38b-859f-48c6-8b01-42dfaf948555" containerName="registry-server"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.162403 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.164920 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.178259 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.178616 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"]
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.205475 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8js4l\" (UniqueName: \"kubernetes.io/projected/a015a029-77ef-48b8-870d-c6e5381cbbbf-kube-api-access-8js4l\") pod \"collect-profiles-29496870-c72gl\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.205533 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a015a029-77ef-48b8-870d-c6e5381cbbbf-config-volume\") pod \"collect-profiles-29496870-c72gl\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.205795 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a015a029-77ef-48b8-870d-c6e5381cbbbf-secret-volume\") pod \"collect-profiles-29496870-c72gl\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.307829 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a015a029-77ef-48b8-870d-c6e5381cbbbf-secret-volume\") pod \"collect-profiles-29496870-c72gl\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.308280 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8js4l\" (UniqueName: \"kubernetes.io/projected/a015a029-77ef-48b8-870d-c6e5381cbbbf-kube-api-access-8js4l\") pod \"collect-profiles-29496870-c72gl\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.308341 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a015a029-77ef-48b8-870d-c6e5381cbbbf-config-volume\") pod \"collect-profiles-29496870-c72gl\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.309274 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a015a029-77ef-48b8-870d-c6e5381cbbbf-config-volume\") pod \"collect-profiles-29496870-c72gl\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.327016 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a015a029-77ef-48b8-870d-c6e5381cbbbf-secret-volume\") pod \"collect-profiles-29496870-c72gl\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.329357 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8js4l\" (UniqueName: \"kubernetes.io/projected/a015a029-77ef-48b8-870d-c6e5381cbbbf-kube-api-access-8js4l\") pod \"collect-profiles-29496870-c72gl\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.488394 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.994528 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"]
Jan 30 22:30:01 crc kubenswrapper[4751]: I0130 22:30:01.683265 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl" event={"ID":"a015a029-77ef-48b8-870d-c6e5381cbbbf","Type":"ContainerStarted","Data":"24be1af0168ea685c3b9ac7cebe33603bdc5a928d7d9a415eddd1b85a3a97b25"}
Jan 30 22:30:01 crc kubenswrapper[4751]: I0130 22:30:01.683319 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl" event={"ID":"a015a029-77ef-48b8-870d-c6e5381cbbbf","Type":"ContainerStarted","Data":"db70346d1789d2c184fbb79f4672d3561cd1e382d2c767cd3d66034d0575acb3"}
Jan 30 22:30:01 crc kubenswrapper[4751]: I0130 22:30:01.706822 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl" podStartSLOduration=1.706802524 podStartE2EDuration="1.706802524s" podCreationTimestamp="2026-01-30 22:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:30:01.706653209 +0000 UTC m=+4540.452475858" watchObservedRunningTime="2026-01-30 22:30:01.706802524 +0000 UTC m=+4540.452625173"
Jan 30 22:30:02 crc kubenswrapper[4751]: I0130 22:30:02.694904 4751 generic.go:334] "Generic (PLEG): container finished" podID="a015a029-77ef-48b8-870d-c6e5381cbbbf" containerID="24be1af0168ea685c3b9ac7cebe33603bdc5a928d7d9a415eddd1b85a3a97b25" exitCode=0
Jan 30 22:30:02 crc kubenswrapper[4751]: I0130 22:30:02.695299 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl" event={"ID":"a015a029-77ef-48b8-870d-c6e5381cbbbf","Type":"ContainerDied","Data":"24be1af0168ea685c3b9ac7cebe33603bdc5a928d7d9a415eddd1b85a3a97b25"}
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.106142 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.200702 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a015a029-77ef-48b8-870d-c6e5381cbbbf-secret-volume\") pod \"a015a029-77ef-48b8-870d-c6e5381cbbbf\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") "
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.200791 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8js4l\" (UniqueName: \"kubernetes.io/projected/a015a029-77ef-48b8-870d-c6e5381cbbbf-kube-api-access-8js4l\") pod \"a015a029-77ef-48b8-870d-c6e5381cbbbf\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") "
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.201011 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a015a029-77ef-48b8-870d-c6e5381cbbbf-config-volume\") pod \"a015a029-77ef-48b8-870d-c6e5381cbbbf\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") "
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.201706 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a015a029-77ef-48b8-870d-c6e5381cbbbf-config-volume" (OuterVolumeSpecName: "config-volume") pod "a015a029-77ef-48b8-870d-c6e5381cbbbf" (UID: "a015a029-77ef-48b8-870d-c6e5381cbbbf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.202087 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a015a029-77ef-48b8-870d-c6e5381cbbbf-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.207018 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a015a029-77ef-48b8-870d-c6e5381cbbbf-kube-api-access-8js4l" (OuterVolumeSpecName: "kube-api-access-8js4l") pod "a015a029-77ef-48b8-870d-c6e5381cbbbf" (UID: "a015a029-77ef-48b8-870d-c6e5381cbbbf"). InnerVolumeSpecName "kube-api-access-8js4l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.214551 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a015a029-77ef-48b8-870d-c6e5381cbbbf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a015a029-77ef-48b8-870d-c6e5381cbbbf" (UID: "a015a029-77ef-48b8-870d-c6e5381cbbbf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.303911 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a015a029-77ef-48b8-870d-c6e5381cbbbf-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.304154 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8js4l\" (UniqueName: \"kubernetes.io/projected/a015a029-77ef-48b8-870d-c6e5381cbbbf-kube-api-access-8js4l\") on node \"crc\" DevicePath \"\""
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.716053 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl" event={"ID":"a015a029-77ef-48b8-870d-c6e5381cbbbf","Type":"ContainerDied","Data":"db70346d1789d2c184fbb79f4672d3561cd1e382d2c767cd3d66034d0575acb3"}
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.716689 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db70346d1789d2c184fbb79f4672d3561cd1e382d2c767cd3d66034d0575acb3"
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.716165 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.790388 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"]
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.803236 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"]
Jan 30 22:30:05 crc kubenswrapper[4751]: I0130 22:30:05.976306 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:30:05 crc kubenswrapper[4751]: E0130 22:30:05.976862 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:30:05 crc kubenswrapper[4751]: I0130 22:30:05.994551 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60a5fa77-b23e-417a-9854-929675be1c58" path="/var/lib/kubelet/pods/60a5fa77-b23e-417a-9854-929675be1c58/volumes"
Jan 30 22:30:17 crc kubenswrapper[4751]: I0130 22:30:17.975846 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:30:17 crc kubenswrapper[4751]: E0130 22:30:17.976730 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:30:28 crc kubenswrapper[4751]: I0130 22:30:28.852979 4751 scope.go:117] "RemoveContainer" containerID="a925b908937d8dd9436a4992fc297b882d7c680a8bb02a09739b64f2a561f95a"
Jan 30 22:30:29 crc kubenswrapper[4751]: I0130 22:30:29.976272 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:30:30 crc kubenswrapper[4751]: I0130 22:30:30.999054 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"99d2a0709014fe5031de012673bab8841bfdcebadd3c614ac1c6d9e193438395"}
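Note the cadence above: after the kill at 22:25:24 every periodic sync attempt (roughly every 10 to 15 seconds) is skipped with the same "back-off 5m0s" message rather than being an actual restart, and only once the window has elapsed does the RemoveContainer at 22:30:29 proceed, with the replacement container starting at 22:30:30. The arithmetic checks out against two timestamps copied from the records above:

```python
#!/usr/bin/env python3
# Verify the CrashLoopBackOff window using two timestamps copied from the
# records above; the 5-minute figure is quoted from the back-off message.
from datetime import datetime, timedelta

FMT = "%H:%M:%S.%f"
killed = datetime.strptime("22:25:24.128452", FMT)     # "Killing container with a grace period"
restarted = datetime.strptime("22:30:30.999054", FMT)  # "ContainerStarted" 99d2a070...

elapsed = restarted - killed
print(f"elapsed since kill: {elapsed}")                             # 0:05:06.870602
print(f"5m back-off respected: {elapsed >= timedelta(minutes=5)}")  # True
```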
Jan 30 22:31:45 crc kubenswrapper[4751]: I0130 22:31:45.954915 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 30 22:31:45 crc kubenswrapper[4751]: E0130 22:31:45.955862 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a015a029-77ef-48b8-870d-c6e5381cbbbf" containerName="collect-profiles"
Jan 30 22:31:45 crc kubenswrapper[4751]: I0130 22:31:45.955877 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a015a029-77ef-48b8-870d-c6e5381cbbbf" containerName="collect-profiles"
Jan 30 22:31:45 crc kubenswrapper[4751]: I0130 22:31:45.956152 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a015a029-77ef-48b8-870d-c6e5381cbbbf" containerName="collect-profiles"
Jan 30 22:31:45 crc kubenswrapper[4751]: I0130 22:31:45.956919 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 30 22:31:45 crc kubenswrapper[4751]: I0130 22:31:45.962076 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Jan 30 22:31:45 crc kubenswrapper[4751]: I0130 22:31:45.963246 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Jan 30 22:31:45 crc kubenswrapper[4751]: I0130 22:31:45.963604 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-dvc9j"
Jan 30 22:31:45 crc kubenswrapper[4751]: I0130 22:31:45.968191 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 30 22:31:45 crc kubenswrapper[4751]: I0130 22:31:45.970782 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.040561 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-config-data\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.040823 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.040930 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.142782 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.142857 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-config-data\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.142887 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.142986 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.143068 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.143125 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.143165 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.143250 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.143419 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnm9k\" (UniqueName: \"kubernetes.io/projected/053bddc4-b1a1-4951-af33-6230acd3ee0b-kube-api-access-cnm9k\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.144525 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-config-data\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.144992 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.149478 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.245633 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.245720 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.245782 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnm9k\" (UniqueName: \"kubernetes.io/projected/053bddc4-b1a1-4951-af33-6230acd3ee0b-kube-api-access-cnm9k\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.245828 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.245858 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.245907 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.246318 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.246712 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.247284 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.249422 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.249526 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.266409 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnm9k\" (UniqueName: \"kubernetes.io/projected/053bddc4-b1a1-4951-af33-6230acd3ee0b-kube-api-access-cnm9k\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.279979 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.292427 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.801548 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.801875 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.837494 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"053bddc4-b1a1-4951-af33-6230acd3ee0b","Type":"ContainerStarted","Data":"01b3d137ed8bb5af449d591205e958b031f5ad78d5d86311bd69b7e07f52d896"}
Jan 30 22:32:24 crc kubenswrapper[4751]: E0130 22:32:24.902697 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Jan 30 22:32:24 crc kubenswrapper[4751]: E0130 22:32:24.905147 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cnm9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(053bddc4-b1a1-4951-af33-6230acd3ee0b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 22:32:24 crc kubenswrapper[4751]: E0130 22:32:24.906375 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="053bddc4-b1a1-4951-af33-6230acd3ee0b"
Jan 30 22:32:25 crc kubenswrapper[4751]: E0130 22:32:25.361234 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="053bddc4-b1a1-4951-af33-6230acd3ee0b"
Jan 30 22:32:40 crc kubenswrapper[4751]: I0130 22:32:40.430090 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 30 22:32:42 crc kubenswrapper[4751]: I0130 22:32:42.592427 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"053bddc4-b1a1-4951-af33-6230acd3ee0b","Type":"ContainerStarted","Data":"4a57649ebddefdd6cfb7979e8b07856c36ff49932c8103c4cfd06fb309f09454"}
Jan 30 22:32:42 crc kubenswrapper[4751]: I0130 22:32:42.613027 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.987366335 podStartE2EDuration="58.613003293s" podCreationTimestamp="2026-01-30 22:31:44 +0000 UTC" firstStartedPulling="2026-01-30 22:31:46.801487424 +0000 UTC m=+4645.547310103" lastFinishedPulling="2026-01-30 22:32:40.427124412 +0000 UTC m=+4699.172947061" observedRunningTime="2026-01-30 22:32:42.608920992 +0000 UTC m=+4701.354743641" watchObservedRunningTime="2026-01-30 22:32:42.613003293 +0000 UTC m=+4701.358825982"
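The tracker record above decomposes the slow start: the first pull was cancelled at 22:32:24, the pod briefly sat in ImagePullBackOff, and the pull window it reports (22:31:46.801 to 22:32:40.427) accounts for nearly all of the 58.6 s end-to-end figure. The SLO duration is what remains once pull time is subtracted, which is how podStartSLOduration=4.987366335 falls out of the other fields. Re-deriving it, with nanoseconds truncated to microseconds for strptime:

```python
#!/usr/bin/env python3
# Re-derive the figures in the pod_startup_latency_tracker record above.
# All timestamps are copied from that record (nanoseconds truncated).
from datetime import datetime

ts = lambda s: datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f")

created  = ts("2026-01-30 22:31:44.000000")   # podCreationTimestamp
pulling  = ts("2026-01-30 22:31:46.801487")   # firstStartedPulling
pulled   = ts("2026-01-30 22:32:40.427124")   # lastFinishedPulling
observed = ts("2026-01-30 22:32:42.613003")   # watchObservedRunningTime

e2e  = (observed - created).total_seconds()   # ~58.613s podStartE2EDuration
pull = (pulled - pulling).total_seconds()     # ~53.626s spent pulling the image
print(f"e2e={e2e:.3f}s pull={pull:.3f}s slo={e2e - pull:.3f}s")  # slo ~4.987s
```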
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:33:24 crc kubenswrapper[4751]: I0130 22:33:24.127281 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.204779 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mmdjh"] Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.228340 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.326479 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mmdjh"] Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.338850 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-utilities\") pod \"redhat-operators-mmdjh\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.352041 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qz2j\" (UniqueName: \"kubernetes.io/projected/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-kube-api-access-7qz2j\") pod \"redhat-operators-mmdjh\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.352392 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-catalog-content\") pod \"redhat-operators-mmdjh\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.455001 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-utilities\") pod \"redhat-operators-mmdjh\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.455275 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qz2j\" (UniqueName: \"kubernetes.io/projected/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-kube-api-access-7qz2j\") pod \"redhat-operators-mmdjh\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.455408 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-catalog-content\") pod \"redhat-operators-mmdjh\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:33:40 crc kubenswrapper[4751]: 
I0130 22:33:40.460011 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-utilities\") pod \"redhat-operators-mmdjh\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.461495 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-catalog-content\") pod \"redhat-operators-mmdjh\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.491888 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qz2j\" (UniqueName: \"kubernetes.io/projected/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-kube-api-access-7qz2j\") pod \"redhat-operators-mmdjh\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.567857 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:33:41 crc kubenswrapper[4751]: I0130 22:33:41.875705 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mmdjh"] Jan 30 22:33:42 crc kubenswrapper[4751]: I0130 22:33:42.262233 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mmdjh" event={"ID":"ad7b70f7-4a24-4ecf-825b-29383cc2b01e","Type":"ContainerDied","Data":"f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be"} Jan 30 22:33:42 crc kubenswrapper[4751]: I0130 22:33:42.262700 4751 generic.go:334] "Generic (PLEG): container finished" podID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerID="f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be" exitCode=0 Jan 30 22:33:42 crc kubenswrapper[4751]: I0130 22:33:42.263042 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mmdjh" event={"ID":"ad7b70f7-4a24-4ecf-825b-29383cc2b01e","Type":"ContainerStarted","Data":"f56f6afdb1c34b3fd6832f87715462fd3fb2f665ac0ee6d4689e6af74b5ac7ce"} Jan 30 22:33:44 crc kubenswrapper[4751]: I0130 22:33:44.287532 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mmdjh" event={"ID":"ad7b70f7-4a24-4ecf-825b-29383cc2b01e","Type":"ContainerStarted","Data":"b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12"} Jan 30 22:33:50 crc kubenswrapper[4751]: I0130 22:33:50.353927 4751 generic.go:334] "Generic (PLEG): container finished" podID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerID="b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12" exitCode=0 Jan 30 22:33:50 crc kubenswrapper[4751]: I0130 22:33:50.354308 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mmdjh" event={"ID":"ad7b70f7-4a24-4ecf-825b-29383cc2b01e","Type":"ContainerDied","Data":"b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12"} Jan 30 22:33:51 crc kubenswrapper[4751]: I0130 22:33:51.368474 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mmdjh" 
event={"ID":"ad7b70f7-4a24-4ecf-825b-29383cc2b01e","Type":"ContainerStarted","Data":"4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45"} Jan 30 22:33:54 crc kubenswrapper[4751]: I0130 22:33:54.127275 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:33:54 crc kubenswrapper[4751]: I0130 22:33:54.127973 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:33:54 crc kubenswrapper[4751]: I0130 22:33:54.128030 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 22:33:54 crc kubenswrapper[4751]: I0130 22:33:54.129046 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99d2a0709014fe5031de012673bab8841bfdcebadd3c614ac1c6d9e193438395"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:33:54 crc kubenswrapper[4751]: I0130 22:33:54.130975 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://99d2a0709014fe5031de012673bab8841bfdcebadd3c614ac1c6d9e193438395" gracePeriod=600 Jan 30 22:33:54 crc kubenswrapper[4751]: I0130 22:33:54.399391 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="99d2a0709014fe5031de012673bab8841bfdcebadd3c614ac1c6d9e193438395" exitCode=0 Jan 30 22:33:54 crc kubenswrapper[4751]: I0130 22:33:54.399526 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"99d2a0709014fe5031de012673bab8841bfdcebadd3c614ac1c6d9e193438395"} Jan 30 22:33:54 crc kubenswrapper[4751]: I0130 22:33:54.399786 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:33:55 crc kubenswrapper[4751]: I0130 22:33:55.417091 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd"} Jan 30 22:33:55 crc kubenswrapper[4751]: I0130 22:33:55.460901 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mmdjh" podStartSLOduration=6.888034032 podStartE2EDuration="15.460876277s" podCreationTimestamp="2026-01-30 22:33:40 +0000 UTC" firstStartedPulling="2026-01-30 22:33:42.265154151 +0000 UTC m=+4761.010976800" lastFinishedPulling="2026-01-30 22:33:50.837996396 +0000 UTC m=+4769.583819045" observedRunningTime="2026-01-30 
22:33:51.39308498 +0000 UTC m=+4770.138907629" watchObservedRunningTime="2026-01-30 22:33:55.460876277 +0000 UTC m=+4774.206698926" Jan 30 22:34:00 crc kubenswrapper[4751]: I0130 22:34:00.570221 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:34:00 crc kubenswrapper[4751]: I0130 22:34:00.570832 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:34:01 crc kubenswrapper[4751]: I0130 22:34:01.620021 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mmdjh" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" probeResult="failure" output=< Jan 30 22:34:01 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:34:01 crc kubenswrapper[4751]: > Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.144709 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pscx6"] Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.147572 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.184071 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pscx6"] Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.205464 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9d691f-2785-4248-80d8-903f36ff7f1f-catalog-content\") pod \"certified-operators-pscx6\" (UID: \"fd9d691f-2785-4248-80d8-903f36ff7f1f\") " pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.205627 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9d691f-2785-4248-80d8-903f36ff7f1f-utilities\") pod \"certified-operators-pscx6\" (UID: \"fd9d691f-2785-4248-80d8-903f36ff7f1f\") " pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.205650 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7hns\" (UniqueName: \"kubernetes.io/projected/fd9d691f-2785-4248-80d8-903f36ff7f1f-kube-api-access-p7hns\") pod \"certified-operators-pscx6\" (UID: \"fd9d691f-2785-4248-80d8-903f36ff7f1f\") " pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.307501 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9d691f-2785-4248-80d8-903f36ff7f1f-utilities\") pod \"certified-operators-pscx6\" (UID: \"fd9d691f-2785-4248-80d8-903f36ff7f1f\") " pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.307561 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7hns\" (UniqueName: \"kubernetes.io/projected/fd9d691f-2785-4248-80d8-903f36ff7f1f-kube-api-access-p7hns\") pod \"certified-operators-pscx6\" (UID: \"fd9d691f-2785-4248-80d8-903f36ff7f1f\") " pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 
22:34:02.307837 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9d691f-2785-4248-80d8-903f36ff7f1f-catalog-content\") pod \"certified-operators-pscx6\" (UID: \"fd9d691f-2785-4248-80d8-903f36ff7f1f\") " pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.309237 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9d691f-2785-4248-80d8-903f36ff7f1f-utilities\") pod \"certified-operators-pscx6\" (UID: \"fd9d691f-2785-4248-80d8-903f36ff7f1f\") " pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.309729 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9d691f-2785-4248-80d8-903f36ff7f1f-catalog-content\") pod \"certified-operators-pscx6\" (UID: \"fd9d691f-2785-4248-80d8-903f36ff7f1f\") " pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.784760 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7hns\" (UniqueName: \"kubernetes.io/projected/fd9d691f-2785-4248-80d8-903f36ff7f1f-kube-api-access-p7hns\") pod \"certified-operators-pscx6\" (UID: \"fd9d691f-2785-4248-80d8-903f36ff7f1f\") " pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:03 crc kubenswrapper[4751]: I0130 22:34:03.077526 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:03 crc kubenswrapper[4751]: I0130 22:34:03.954363 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pscx6"] Jan 30 22:34:04 crc kubenswrapper[4751]: I0130 22:34:04.523856 4751 generic.go:334] "Generic (PLEG): container finished" podID="fd9d691f-2785-4248-80d8-903f36ff7f1f" containerID="6648c3460f5d5803211a53bb4c08c8569982bf978ff508c811d83df2f6906ec9" exitCode=0 Jan 30 22:34:04 crc kubenswrapper[4751]: I0130 22:34:04.523924 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pscx6" event={"ID":"fd9d691f-2785-4248-80d8-903f36ff7f1f","Type":"ContainerDied","Data":"6648c3460f5d5803211a53bb4c08c8569982bf978ff508c811d83df2f6906ec9"} Jan 30 22:34:04 crc kubenswrapper[4751]: I0130 22:34:04.524146 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pscx6" event={"ID":"fd9d691f-2785-4248-80d8-903f36ff7f1f","Type":"ContainerStarted","Data":"6ad69d79162c25ac3263a83274772fdd49a6d71406dc5eefadff25bccf952620"} Jan 30 22:34:10 crc kubenswrapper[4751]: I0130 22:34:10.601758 4751 generic.go:334] "Generic (PLEG): container finished" podID="fd9d691f-2785-4248-80d8-903f36ff7f1f" containerID="2e43ad21a17b63ad8acc030f78f472adb49950c157beb3129e02bf74fba4aeaf" exitCode=0 Jan 30 22:34:10 crc kubenswrapper[4751]: I0130 22:34:10.601882 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pscx6" event={"ID":"fd9d691f-2785-4248-80d8-903f36ff7f1f","Type":"ContainerDied","Data":"2e43ad21a17b63ad8acc030f78f472adb49950c157beb3129e02bf74fba4aeaf"} Jan 30 22:34:11 crc kubenswrapper[4751]: I0130 22:34:11.615973 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pscx6" 
event={"ID":"fd9d691f-2785-4248-80d8-903f36ff7f1f","Type":"ContainerStarted","Data":"6479c675db0b9a47e314dc95f0bdf6f15cc383d8a737d250a2d08ecf07b4e508"} Jan 30 22:34:11 crc kubenswrapper[4751]: I0130 22:34:11.636226 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mmdjh" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" probeResult="failure" output=< Jan 30 22:34:11 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:34:11 crc kubenswrapper[4751]: > Jan 30 22:34:11 crc kubenswrapper[4751]: I0130 22:34:11.645241 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pscx6" podStartSLOduration=2.851357857 podStartE2EDuration="9.645209796s" podCreationTimestamp="2026-01-30 22:34:02 +0000 UTC" firstStartedPulling="2026-01-30 22:34:04.525674855 +0000 UTC m=+4783.271497504" lastFinishedPulling="2026-01-30 22:34:11.319526794 +0000 UTC m=+4790.065349443" observedRunningTime="2026-01-30 22:34:11.630646622 +0000 UTC m=+4790.376469291" watchObservedRunningTime="2026-01-30 22:34:11.645209796 +0000 UTC m=+4790.391032445" Jan 30 22:34:13 crc kubenswrapper[4751]: I0130 22:34:13.078467 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:13 crc kubenswrapper[4751]: I0130 22:34:13.078818 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:13 crc kubenswrapper[4751]: I0130 22:34:13.133226 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:21 crc kubenswrapper[4751]: I0130 22:34:21.622720 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mmdjh" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" probeResult="failure" output=< Jan 30 22:34:21 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:34:21 crc kubenswrapper[4751]: > Jan 30 22:34:23 crc kubenswrapper[4751]: I0130 22:34:23.153889 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:23 crc kubenswrapper[4751]: I0130 22:34:23.316076 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pscx6"] Jan 30 22:34:23 crc kubenswrapper[4751]: I0130 22:34:23.370392 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kcjb7"] Jan 30 22:34:23 crc kubenswrapper[4751]: I0130 22:34:23.742911 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kcjb7" podUID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" containerName="registry-server" containerID="cri-o://866096c9e0962b4450aeafceff9a6e799efcc8a53a6c4825b431141eedcb2cac" gracePeriod=2 Jan 30 22:34:24 crc kubenswrapper[4751]: I0130 22:34:24.753630 4751 generic.go:334] "Generic (PLEG): container finished" podID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" containerID="866096c9e0962b4450aeafceff9a6e799efcc8a53a6c4825b431141eedcb2cac" exitCode=0 Jan 30 22:34:24 crc kubenswrapper[4751]: I0130 22:34:24.753718 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcjb7" 
event={"ID":"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776","Type":"ContainerDied","Data":"866096c9e0962b4450aeafceff9a6e799efcc8a53a6c4825b431141eedcb2cac"} Jan 30 22:34:24 crc kubenswrapper[4751]: I0130 22:34:24.887834 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.072608 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbtz7\" (UniqueName: \"kubernetes.io/projected/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-kube-api-access-qbtz7\") pod \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.073114 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-catalog-content\") pod \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.073348 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-utilities\") pod \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.076157 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-utilities" (OuterVolumeSpecName: "utilities") pod "e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" (UID: "e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.095067 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-kube-api-access-qbtz7" (OuterVolumeSpecName: "kube-api-access-qbtz7") pod "e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" (UID: "e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776"). InnerVolumeSpecName "kube-api-access-qbtz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.176219 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.176262 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbtz7\" (UniqueName: \"kubernetes.io/projected/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-kube-api-access-qbtz7\") on node \"crc\" DevicePath \"\"" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.176723 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" (UID: "e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.277770 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.764744 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcjb7" event={"ID":"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776","Type":"ContainerDied","Data":"95551a4e245bf6ad27a1b26cb62b9724a5be7406b4d6229b016888f12ca7d6d4"} Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.764992 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.765763 4751 scope.go:117] "RemoveContainer" containerID="866096c9e0962b4450aeafceff9a6e799efcc8a53a6c4825b431141eedcb2cac" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.805851 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kcjb7"] Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.816347 4751 scope.go:117] "RemoveContainer" containerID="b9102fc49cd164d867074d03c63d8593be70d6d663c1f645db5a7cf70fe3ec65" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.817349 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kcjb7"] Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.848822 4751 scope.go:117] "RemoveContainer" containerID="f0ff7f17884024cadb59819e4114f64f13e4c4199dcbe665c88b3d9400eb196b" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.993737 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" path="/var/lib/kubelet/pods/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776/volumes" Jan 30 22:34:31 crc kubenswrapper[4751]: I0130 22:34:31.626695 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mmdjh" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" probeResult="failure" output=< Jan 30 22:34:31 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:34:31 crc kubenswrapper[4751]: > Jan 30 22:34:41 crc kubenswrapper[4751]: I0130 22:34:41.623923 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mmdjh" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" probeResult="failure" output=< Jan 30 22:34:41 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:34:41 crc kubenswrapper[4751]: > Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.041399 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2jv98"] Jan 30 22:34:44 crc kubenswrapper[4751]: E0130 22:34:44.048888 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" containerName="registry-server" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.048942 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" containerName="registry-server" Jan 30 22:34:44 crc kubenswrapper[4751]: E0130 22:34:44.048976 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" 
containerName="extract-utilities" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.049006 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" containerName="extract-utilities" Jan 30 22:34:44 crc kubenswrapper[4751]: E0130 22:34:44.049029 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" containerName="extract-content" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.049051 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" containerName="extract-content" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.050969 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" containerName="registry-server" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.063842 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.142846 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzzxp\" (UniqueName: \"kubernetes.io/projected/328960d4-cdf9-4134-b966-af48db38c682-kube-api-access-nzzxp\") pod \"redhat-marketplace-2jv98\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.143638 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-utilities\") pod \"redhat-marketplace-2jv98\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.143714 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-catalog-content\") pod \"redhat-marketplace-2jv98\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.245734 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jv98"] Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.246818 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-catalog-content\") pod \"redhat-marketplace-2jv98\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.246989 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzzxp\" (UniqueName: \"kubernetes.io/projected/328960d4-cdf9-4134-b966-af48db38c682-kube-api-access-nzzxp\") pod \"redhat-marketplace-2jv98\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.247145 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-utilities\") pod \"redhat-marketplace-2jv98\" (UID: 
\"328960d4-cdf9-4134-b966-af48db38c682\") " pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.259822 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-catalog-content\") pod \"redhat-marketplace-2jv98\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.260424 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-utilities\") pod \"redhat-marketplace-2jv98\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.309009 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzzxp\" (UniqueName: \"kubernetes.io/projected/328960d4-cdf9-4134-b966-af48db38c682-kube-api-access-nzzxp\") pod \"redhat-marketplace-2jv98\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.422073 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:45 crc kubenswrapper[4751]: I0130 22:34:45.844704 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jv98"] Jan 30 22:34:45 crc kubenswrapper[4751]: I0130 22:34:45.998922 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jv98" event={"ID":"328960d4-cdf9-4134-b966-af48db38c682","Type":"ContainerStarted","Data":"3e4db97d3dab806bbea89c7b116ea6482682a310a5e96ab17c7a34a929e79d26"} Jan 30 22:34:47 crc kubenswrapper[4751]: I0130 22:34:47.012440 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jv98" event={"ID":"328960d4-cdf9-4134-b966-af48db38c682","Type":"ContainerDied","Data":"b7d035e0c8a1729f7bd553455a75aa5d582ba92d231e9e7e3235b4066f573ef7"} Jan 30 22:34:47 crc kubenswrapper[4751]: I0130 22:34:47.014754 4751 generic.go:334] "Generic (PLEG): container finished" podID="328960d4-cdf9-4134-b966-af48db38c682" containerID="b7d035e0c8a1729f7bd553455a75aa5d582ba92d231e9e7e3235b4066f573ef7" exitCode=0 Jan 30 22:34:49 crc kubenswrapper[4751]: I0130 22:34:49.037059 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jv98" event={"ID":"328960d4-cdf9-4134-b966-af48db38c682","Type":"ContainerStarted","Data":"84a7032a8ce506245c02b101a1f3d62fe28f089900b8b08794dd653c2af3888d"} Jan 30 22:34:50 crc kubenswrapper[4751]: I0130 22:34:50.049390 4751 generic.go:334] "Generic (PLEG): container finished" podID="328960d4-cdf9-4134-b966-af48db38c682" containerID="84a7032a8ce506245c02b101a1f3d62fe28f089900b8b08794dd653c2af3888d" exitCode=0 Jan 30 22:34:50 crc kubenswrapper[4751]: I0130 22:34:50.049443 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jv98" event={"ID":"328960d4-cdf9-4134-b966-af48db38c682","Type":"ContainerDied","Data":"84a7032a8ce506245c02b101a1f3d62fe28f089900b8b08794dd653c2af3888d"} Jan 30 22:34:51 crc kubenswrapper[4751]: I0130 22:34:51.065699 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-2jv98" event={"ID":"328960d4-cdf9-4134-b966-af48db38c682","Type":"ContainerStarted","Data":"17f2dbf626097fd957f4861f087689c5c713092f2977ce4cc3de8f3499d1d956"} Jan 30 22:34:51 crc kubenswrapper[4751]: I0130 22:34:51.110831 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2jv98" podStartSLOduration=4.5105243139999995 podStartE2EDuration="8.106373584s" podCreationTimestamp="2026-01-30 22:34:43 +0000 UTC" firstStartedPulling="2026-01-30 22:34:47.016176878 +0000 UTC m=+4825.761999537" lastFinishedPulling="2026-01-30 22:34:50.612026158 +0000 UTC m=+4829.357848807" observedRunningTime="2026-01-30 22:34:51.100152405 +0000 UTC m=+4829.845975084" watchObservedRunningTime="2026-01-30 22:34:51.106373584 +0000 UTC m=+4829.852196253" Jan 30 22:34:51 crc kubenswrapper[4751]: I0130 22:34:51.688811 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mmdjh" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" probeResult="failure" output=< Jan 30 22:34:51 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:34:51 crc kubenswrapper[4751]: > Jan 30 22:34:54 crc kubenswrapper[4751]: I0130 22:34:54.423525 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:54 crc kubenswrapper[4751]: I0130 22:34:54.424096 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:55 crc kubenswrapper[4751]: I0130 22:34:55.481981 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-2jv98" podUID="328960d4-cdf9-4134-b966-af48db38c682" containerName="registry-server" probeResult="failure" output=< Jan 30 22:34:55 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:34:55 crc kubenswrapper[4751]: > Jan 30 22:35:01 crc kubenswrapper[4751]: I0130 22:35:01.636392 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mmdjh" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" probeResult="failure" output=< Jan 30 22:35:01 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:35:01 crc kubenswrapper[4751]: > Jan 30 22:35:04 crc kubenswrapper[4751]: I0130 22:35:04.494524 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:35:04 crc kubenswrapper[4751]: I0130 22:35:04.553467 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:35:04 crc kubenswrapper[4751]: I0130 22:35:04.795659 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jv98"] Jan 30 22:35:06 crc kubenswrapper[4751]: I0130 22:35:06.225846 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2jv98" podUID="328960d4-cdf9-4134-b966-af48db38c682" containerName="registry-server" containerID="cri-o://17f2dbf626097fd957f4861f087689c5c713092f2977ce4cc3de8f3499d1d956" gracePeriod=2 Jan 30 22:35:07 crc kubenswrapper[4751]: I0130 22:35:07.276994 4751 generic.go:334] "Generic (PLEG): container finished" 
podID="328960d4-cdf9-4134-b966-af48db38c682" containerID="17f2dbf626097fd957f4861f087689c5c713092f2977ce4cc3de8f3499d1d956" exitCode=0 Jan 30 22:35:07 crc kubenswrapper[4751]: I0130 22:35:07.277700 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jv98" event={"ID":"328960d4-cdf9-4134-b966-af48db38c682","Type":"ContainerDied","Data":"17f2dbf626097fd957f4861f087689c5c713092f2977ce4cc3de8f3499d1d956"} Jan 30 22:35:07 crc kubenswrapper[4751]: I0130 22:35:07.926764 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.000584 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-utilities\") pod \"328960d4-cdf9-4134-b966-af48db38c682\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.000992 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzzxp\" (UniqueName: \"kubernetes.io/projected/328960d4-cdf9-4134-b966-af48db38c682-kube-api-access-nzzxp\") pod \"328960d4-cdf9-4134-b966-af48db38c682\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.001063 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-catalog-content\") pod \"328960d4-cdf9-4134-b966-af48db38c682\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.036785 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-utilities" (OuterVolumeSpecName: "utilities") pod "328960d4-cdf9-4134-b966-af48db38c682" (UID: "328960d4-cdf9-4134-b966-af48db38c682"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.075073 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328960d4-cdf9-4134-b966-af48db38c682-kube-api-access-nzzxp" (OuterVolumeSpecName: "kube-api-access-nzzxp") pod "328960d4-cdf9-4134-b966-af48db38c682" (UID: "328960d4-cdf9-4134-b966-af48db38c682"). InnerVolumeSpecName "kube-api-access-nzzxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.096457 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "328960d4-cdf9-4134-b966-af48db38c682" (UID: "328960d4-cdf9-4134-b966-af48db38c682"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.108409 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzzxp\" (UniqueName: \"kubernetes.io/projected/328960d4-cdf9-4134-b966-af48db38c682-kube-api-access-nzzxp\") on node \"crc\" DevicePath \"\"" Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.108444 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.108455 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.293366 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jv98" event={"ID":"328960d4-cdf9-4134-b966-af48db38c682","Type":"ContainerDied","Data":"3e4db97d3dab806bbea89c7b116ea6482682a310a5e96ab17c7a34a929e79d26"} Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.293468 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.295601 4751 scope.go:117] "RemoveContainer" containerID="17f2dbf626097fd957f4861f087689c5c713092f2977ce4cc3de8f3499d1d956" Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.336572 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jv98"] Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.350421 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jv98"] Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.362729 4751 scope.go:117] "RemoveContainer" containerID="84a7032a8ce506245c02b101a1f3d62fe28f089900b8b08794dd653c2af3888d" Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.410269 4751 scope.go:117] "RemoveContainer" containerID="b7d035e0c8a1729f7bd553455a75aa5d582ba92d231e9e7e3235b4066f573ef7" Jan 30 22:35:09 crc kubenswrapper[4751]: I0130 22:35:09.995501 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="328960d4-cdf9-4134-b966-af48db38c682" path="/var/lib/kubelet/pods/328960d4-cdf9-4134-b966-af48db38c682/volumes" Jan 30 22:35:10 crc kubenswrapper[4751]: I0130 22:35:10.770173 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:35:10 crc kubenswrapper[4751]: I0130 22:35:10.872565 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:35:11 crc kubenswrapper[4751]: I0130 22:35:11.168523 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mmdjh"] Jan 30 22:35:12 crc kubenswrapper[4751]: I0130 22:35:12.331421 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mmdjh" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" containerID="cri-o://4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45" gracePeriod=2 Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.170788 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.238534 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-utilities\") pod \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.238815 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qz2j\" (UniqueName: \"kubernetes.io/projected/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-kube-api-access-7qz2j\") pod \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.238931 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-catalog-content\") pod \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.242809 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-utilities" (OuterVolumeSpecName: "utilities") pod "ad7b70f7-4a24-4ecf-825b-29383cc2b01e" (UID: "ad7b70f7-4a24-4ecf-825b-29383cc2b01e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.264814 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-kube-api-access-7qz2j" (OuterVolumeSpecName: "kube-api-access-7qz2j") pod "ad7b70f7-4a24-4ecf-825b-29383cc2b01e" (UID: "ad7b70f7-4a24-4ecf-825b-29383cc2b01e"). InnerVolumeSpecName "kube-api-access-7qz2j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.343124 4751 generic.go:334] "Generic (PLEG): container finished" podID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerID="4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45" exitCode=0 Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.343186 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mmdjh" event={"ID":"ad7b70f7-4a24-4ecf-825b-29383cc2b01e","Type":"ContainerDied","Data":"4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45"} Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.343212 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mmdjh" event={"ID":"ad7b70f7-4a24-4ecf-825b-29383cc2b01e","Type":"ContainerDied","Data":"f56f6afdb1c34b3fd6832f87715462fd3fb2f665ac0ee6d4689e6af74b5ac7ce"} Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.343234 4751 scope.go:117] "RemoveContainer" containerID="4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.343255 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.343300 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qz2j\" (UniqueName: \"kubernetes.io/projected/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-kube-api-access-7qz2j\") on node \"crc\" DevicePath \"\"" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.343371 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.374363 4751 scope.go:117] "RemoveContainer" containerID="b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.394070 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad7b70f7-4a24-4ecf-825b-29383cc2b01e" (UID: "ad7b70f7-4a24-4ecf-825b-29383cc2b01e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.406211 4751 scope.go:117] "RemoveContainer" containerID="f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.445862 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.465500 4751 scope.go:117] "RemoveContainer" containerID="4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45" Jan 30 22:35:13 crc kubenswrapper[4751]: E0130 22:35:13.470459 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45\": container with ID starting with 4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45 not found: ID does not exist" containerID="4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.470628 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45"} err="failed to get container status \"4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45\": rpc error: code = NotFound desc = could not find container \"4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45\": container with ID starting with 4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45 not found: ID does not exist" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.470712 4751 scope.go:117] "RemoveContainer" containerID="b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12" Jan 30 22:35:13 crc kubenswrapper[4751]: E0130 22:35:13.471715 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12\": container with ID starting with b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12 not found: ID does not exist" containerID="b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.471754 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12"} err="failed to get container status \"b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12\": rpc error: code = NotFound desc = could not find container \"b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12\": container with ID starting with b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12 not found: ID does not exist" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.471778 4751 scope.go:117] "RemoveContainer" containerID="f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be" Jan 30 22:35:13 crc kubenswrapper[4751]: E0130 22:35:13.472213 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be\": container with ID starting with f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be not found: ID does not exist" 
containerID="f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.472433 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be"} err="failed to get container status \"f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be\": rpc error: code = NotFound desc = could not find container \"f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be\": container with ID starting with f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be not found: ID does not exist" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.687732 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mmdjh"] Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.708154 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mmdjh"] Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.989053 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" path="/var/lib/kubelet/pods/ad7b70f7-4a24-4ecf-825b-29383cc2b01e/volumes" Jan 30 22:35:54 crc kubenswrapper[4751]: I0130 22:35:54.128569 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:35:54 crc kubenswrapper[4751]: I0130 22:35:54.130290 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:36:24 crc kubenswrapper[4751]: I0130 22:36:24.126853 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:36:24 crc kubenswrapper[4751]: I0130 22:36:24.127603 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:36:54 crc kubenswrapper[4751]: I0130 22:36:54.126446 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:36:54 crc kubenswrapper[4751]: I0130 22:36:54.127052 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:36:54 crc kubenswrapper[4751]: I0130 
22:36:54.127108 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 22:36:54 crc kubenswrapper[4751]: I0130 22:36:54.128129 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:36:54 crc kubenswrapper[4751]: I0130 22:36:54.128312 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" gracePeriod=600 Jan 30 22:36:54 crc kubenswrapper[4751]: E0130 22:36:54.252006 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:36:54 crc kubenswrapper[4751]: I0130 22:36:54.383512 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" exitCode=0 Jan 30 22:36:54 crc kubenswrapper[4751]: I0130 22:36:54.383572 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd"} Jan 30 22:36:54 crc kubenswrapper[4751]: I0130 22:36:54.383627 4751 scope.go:117] "RemoveContainer" containerID="99d2a0709014fe5031de012673bab8841bfdcebadd3c614ac1c6d9e193438395" Jan 30 22:36:54 crc kubenswrapper[4751]: I0130 22:36:54.384165 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:36:54 crc kubenswrapper[4751]: E0130 22:36:54.384577 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:37:05 crc kubenswrapper[4751]: I0130 22:37:05.976499 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:37:05 crc kubenswrapper[4751]: E0130 22:37:05.977303 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" 
podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:37:16 crc kubenswrapper[4751]: I0130 22:37:16.976033 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:37:16 crc kubenswrapper[4751]: E0130 22:37:16.976797 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:37:28 crc kubenswrapper[4751]: I0130 22:37:28.975993 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:37:28 crc kubenswrapper[4751]: E0130 22:37:28.977260 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:37:40 crc kubenswrapper[4751]: I0130 22:37:40.977842 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:37:40 crc kubenswrapper[4751]: E0130 22:37:40.978647 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:37:54 crc kubenswrapper[4751]: I0130 22:37:54.976168 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:37:54 crc kubenswrapper[4751]: E0130 22:37:54.976994 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:38:07 crc kubenswrapper[4751]: I0130 22:38:07.976338 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:38:07 crc kubenswrapper[4751]: E0130 22:38:07.977483 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:38:19 crc kubenswrapper[4751]: I0130 22:38:19.976131 4751 scope.go:117] "RemoveContainer" 
containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:38:19 crc kubenswrapper[4751]: E0130 22:38:19.976923 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:38:33 crc kubenswrapper[4751]: I0130 22:38:33.976026 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:38:33 crc kubenswrapper[4751]: E0130 22:38:33.976866 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:38:45 crc kubenswrapper[4751]: I0130 22:38:45.977441 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:38:45 crc kubenswrapper[4751]: E0130 22:38:45.978937 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:38:58 crc kubenswrapper[4751]: I0130 22:38:58.975761 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:38:58 crc kubenswrapper[4751]: E0130 22:38:58.976550 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:39:12 crc kubenswrapper[4751]: I0130 22:39:12.976109 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:39:12 crc kubenswrapper[4751]: E0130 22:39:12.976881 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:39:23 crc kubenswrapper[4751]: I0130 22:39:23.975809 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:39:23 crc kubenswrapper[4751]: E0130 22:39:23.976764 4751 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:39:38 crc kubenswrapper[4751]: I0130 22:39:38.976804 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:39:38 crc kubenswrapper[4751]: E0130 22:39:38.977864 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:39:53 crc kubenswrapper[4751]: I0130 22:39:53.977047 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:39:53 crc kubenswrapper[4751]: E0130 22:39:53.977762 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:40:08 crc kubenswrapper[4751]: I0130 22:40:08.976542 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:40:08 crc kubenswrapper[4751]: E0130 22:40:08.978011 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:40:23 crc kubenswrapper[4751]: I0130 22:40:23.976508 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:40:23 crc kubenswrapper[4751]: E0130 22:40:23.977355 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:40:35 crc kubenswrapper[4751]: I0130 22:40:35.978287 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:40:35 crc kubenswrapper[4751]: E0130 22:40:35.979355 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:40:43 crc kubenswrapper[4751]: I0130 22:40:43.945106 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mx86x"] Jan 30 22:40:43 crc kubenswrapper[4751]: E0130 22:40:43.950982 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="extract-content" Jan 30 22:40:43 crc kubenswrapper[4751]: I0130 22:40:43.951082 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="extract-content" Jan 30 22:40:43 crc kubenswrapper[4751]: E0130 22:40:43.951122 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328960d4-cdf9-4134-b966-af48db38c682" containerName="extract-content" Jan 30 22:40:43 crc kubenswrapper[4751]: I0130 22:40:43.951131 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="328960d4-cdf9-4134-b966-af48db38c682" containerName="extract-content" Jan 30 22:40:43 crc kubenswrapper[4751]: E0130 22:40:43.951152 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328960d4-cdf9-4134-b966-af48db38c682" containerName="extract-utilities" Jan 30 22:40:43 crc kubenswrapper[4751]: I0130 22:40:43.951160 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="328960d4-cdf9-4134-b966-af48db38c682" containerName="extract-utilities" Jan 30 22:40:43 crc kubenswrapper[4751]: E0130 22:40:43.951172 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="extract-utilities" Jan 30 22:40:43 crc kubenswrapper[4751]: I0130 22:40:43.951180 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="extract-utilities" Jan 30 22:40:43 crc kubenswrapper[4751]: E0130 22:40:43.951198 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" Jan 30 22:40:43 crc kubenswrapper[4751]: I0130 22:40:43.951215 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" Jan 30 22:40:43 crc kubenswrapper[4751]: E0130 22:40:43.951242 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328960d4-cdf9-4134-b966-af48db38c682" containerName="registry-server" Jan 30 22:40:43 crc kubenswrapper[4751]: I0130 22:40:43.951252 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="328960d4-cdf9-4134-b966-af48db38c682" containerName="registry-server" Jan 30 22:40:43 crc kubenswrapper[4751]: I0130 22:40:43.952755 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" Jan 30 22:40:43 crc kubenswrapper[4751]: I0130 22:40:43.952793 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="328960d4-cdf9-4134-b966-af48db38c682" containerName="registry-server" Jan 30 22:40:43 crc kubenswrapper[4751]: I0130 22:40:43.959289 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.024555 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mx86x"] Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.079717 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-catalog-content\") pod \"community-operators-mx86x\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.080035 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzzg6\" (UniqueName: \"kubernetes.io/projected/13f26c61-3909-4cab-9603-935ea3e141f7-kube-api-access-pzzg6\") pod \"community-operators-mx86x\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.080201 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-utilities\") pod \"community-operators-mx86x\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.184917 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-catalog-content\") pod \"community-operators-mx86x\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.185282 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzzg6\" (UniqueName: \"kubernetes.io/projected/13f26c61-3909-4cab-9603-935ea3e141f7-kube-api-access-pzzg6\") pod \"community-operators-mx86x\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.185545 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-utilities\") pod \"community-operators-mx86x\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.187694 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-catalog-content\") pod \"community-operators-mx86x\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.187868 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-utilities\") pod \"community-operators-mx86x\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.218122 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pzzg6\" (UniqueName: \"kubernetes.io/projected/13f26c61-3909-4cab-9603-935ea3e141f7-kube-api-access-pzzg6\") pod \"community-operators-mx86x\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.292369 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:45 crc kubenswrapper[4751]: I0130 22:40:45.473047 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mx86x"] Jan 30 22:40:45 crc kubenswrapper[4751]: I0130 22:40:45.892123 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx86x" event={"ID":"13f26c61-3909-4cab-9603-935ea3e141f7","Type":"ContainerStarted","Data":"0caa71c35533108c2603378372f1b0bcf3887c19290e714633a57af4fbebb8fe"} Jan 30 22:40:46 crc kubenswrapper[4751]: I0130 22:40:46.905932 4751 generic.go:334] "Generic (PLEG): container finished" podID="13f26c61-3909-4cab-9603-935ea3e141f7" containerID="f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4" exitCode=0 Jan 30 22:40:46 crc kubenswrapper[4751]: I0130 22:40:46.906226 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx86x" event={"ID":"13f26c61-3909-4cab-9603-935ea3e141f7","Type":"ContainerDied","Data":"f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4"} Jan 30 22:40:46 crc kubenswrapper[4751]: I0130 22:40:46.912339 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:40:48 crc kubenswrapper[4751]: I0130 22:40:48.947161 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx86x" event={"ID":"13f26c61-3909-4cab-9603-935ea3e141f7","Type":"ContainerStarted","Data":"4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28"} Jan 30 22:40:48 crc kubenswrapper[4751]: I0130 22:40:48.977194 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:40:48 crc kubenswrapper[4751]: E0130 22:40:48.977943 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:40:51 crc kubenswrapper[4751]: I0130 22:40:51.003470 4751 generic.go:334] "Generic (PLEG): container finished" podID="13f26c61-3909-4cab-9603-935ea3e141f7" containerID="4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28" exitCode=0 Jan 30 22:40:51 crc kubenswrapper[4751]: I0130 22:40:51.003553 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx86x" event={"ID":"13f26c61-3909-4cab-9603-935ea3e141f7","Type":"ContainerDied","Data":"4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28"} Jan 30 22:40:52 crc kubenswrapper[4751]: I0130 22:40:52.018790 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx86x" 
event={"ID":"13f26c61-3909-4cab-9603-935ea3e141f7","Type":"ContainerStarted","Data":"fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9"} Jan 30 22:40:52 crc kubenswrapper[4751]: I0130 22:40:52.055342 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mx86x" podStartSLOduration=4.472165214 podStartE2EDuration="9.055289436s" podCreationTimestamp="2026-01-30 22:40:43 +0000 UTC" firstStartedPulling="2026-01-30 22:40:46.908230347 +0000 UTC m=+5185.654052996" lastFinishedPulling="2026-01-30 22:40:51.491354569 +0000 UTC m=+5190.237177218" observedRunningTime="2026-01-30 22:40:52.043961764 +0000 UTC m=+5190.789784413" watchObservedRunningTime="2026-01-30 22:40:52.055289436 +0000 UTC m=+5190.801112085" Jan 30 22:40:54 crc kubenswrapper[4751]: I0130 22:40:54.293260 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:54 crc kubenswrapper[4751]: I0130 22:40:54.294484 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:55 crc kubenswrapper[4751]: I0130 22:40:55.346571 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-mx86x" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" containerName="registry-server" probeResult="failure" output=< Jan 30 22:40:55 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:40:55 crc kubenswrapper[4751]: > Jan 30 22:41:00 crc kubenswrapper[4751]: I0130 22:41:00.976753 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:41:00 crc kubenswrapper[4751]: E0130 22:41:00.977674 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:41:05 crc kubenswrapper[4751]: I0130 22:41:05.353260 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-mx86x" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" containerName="registry-server" probeResult="failure" output=< Jan 30 22:41:05 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:41:05 crc kubenswrapper[4751]: > Jan 30 22:41:12 crc kubenswrapper[4751]: I0130 22:41:12.975920 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:41:12 crc kubenswrapper[4751]: E0130 22:41:12.976721 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:41:14 crc kubenswrapper[4751]: I0130 22:41:14.341016 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:41:14 
crc kubenswrapper[4751]: I0130 22:41:14.393913 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:41:15 crc kubenswrapper[4751]: I0130 22:41:15.130798 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mx86x"] Jan 30 22:41:16 crc kubenswrapper[4751]: I0130 22:41:16.283957 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mx86x" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" containerName="registry-server" containerID="cri-o://fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9" gracePeriod=2 Jan 30 22:41:16 crc kubenswrapper[4751]: I0130 22:41:16.997863 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.084915 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-catalog-content\") pod \"13f26c61-3909-4cab-9603-935ea3e141f7\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.085010 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzzg6\" (UniqueName: \"kubernetes.io/projected/13f26c61-3909-4cab-9603-935ea3e141f7-kube-api-access-pzzg6\") pod \"13f26c61-3909-4cab-9603-935ea3e141f7\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.085113 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-utilities\") pod \"13f26c61-3909-4cab-9603-935ea3e141f7\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.089992 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-utilities" (OuterVolumeSpecName: "utilities") pod "13f26c61-3909-4cab-9603-935ea3e141f7" (UID: "13f26c61-3909-4cab-9603-935ea3e141f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.102778 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13f26c61-3909-4cab-9603-935ea3e141f7-kube-api-access-pzzg6" (OuterVolumeSpecName: "kube-api-access-pzzg6") pod "13f26c61-3909-4cab-9603-935ea3e141f7" (UID: "13f26c61-3909-4cab-9603-935ea3e141f7"). InnerVolumeSpecName "kube-api-access-pzzg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.188143 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13f26c61-3909-4cab-9603-935ea3e141f7" (UID: "13f26c61-3909-4cab-9603-935ea3e141f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.188703 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-catalog-content\") pod \"13f26c61-3909-4cab-9603-935ea3e141f7\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.189474 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzzg6\" (UniqueName: \"kubernetes.io/projected/13f26c61-3909-4cab-9603-935ea3e141f7-kube-api-access-pzzg6\") on node \"crc\" DevicePath \"\"" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.189491 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:41:17 crc kubenswrapper[4751]: W0130 22:41:17.190667 4751 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/13f26c61-3909-4cab-9603-935ea3e141f7/volumes/kubernetes.io~empty-dir/catalog-content Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.190694 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13f26c61-3909-4cab-9603-935ea3e141f7" (UID: "13f26c61-3909-4cab-9603-935ea3e141f7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.303277 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.307894 4751 generic.go:334] "Generic (PLEG): container finished" podID="13f26c61-3909-4cab-9603-935ea3e141f7" containerID="fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9" exitCode=0 Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.307936 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx86x" event={"ID":"13f26c61-3909-4cab-9603-935ea3e141f7","Type":"ContainerDied","Data":"fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9"} Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.307965 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx86x" event={"ID":"13f26c61-3909-4cab-9603-935ea3e141f7","Type":"ContainerDied","Data":"0caa71c35533108c2603378372f1b0bcf3887c19290e714633a57af4fbebb8fe"} Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.307982 4751 scope.go:117] "RemoveContainer" containerID="fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.309021 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.351295 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mx86x"] Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.362916 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mx86x"] Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.365049 4751 scope.go:117] "RemoveContainer" containerID="4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.392572 4751 scope.go:117] "RemoveContainer" containerID="f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.456943 4751 scope.go:117] "RemoveContainer" containerID="fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9" Jan 30 22:41:17 crc kubenswrapper[4751]: E0130 22:41:17.460247 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9\": container with ID starting with fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9 not found: ID does not exist" containerID="fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.460302 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9"} err="failed to get container status \"fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9\": rpc error: code = NotFound desc = could not find container \"fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9\": container with ID starting with fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9 not found: ID does not exist" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.460370 4751 scope.go:117] "RemoveContainer" containerID="4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28" Jan 30 22:41:17 crc kubenswrapper[4751]: E0130 22:41:17.460964 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28\": container with ID starting with 4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28 not found: ID does not exist" containerID="4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.461121 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28"} err="failed to get container status \"4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28\": rpc error: code = NotFound desc = could not find container \"4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28\": container with ID starting with 4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28 not found: ID does not exist" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.461354 4751 scope.go:117] "RemoveContainer" containerID="f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4" Jan 30 22:41:17 crc kubenswrapper[4751]: E0130 22:41:17.461804 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4\": container with ID starting with f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4 not found: ID does not exist" containerID="f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.461829 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4"} err="failed to get container status \"f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4\": rpc error: code = NotFound desc = could not find container \"f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4\": container with ID starting with f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4 not found: ID does not exist" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.992490 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" path="/var/lib/kubelet/pods/13f26c61-3909-4cab-9603-935ea3e141f7/volumes" Jan 30 22:41:25 crc kubenswrapper[4751]: I0130 22:41:25.978785 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:41:25 crc kubenswrapper[4751]: E0130 22:41:25.980053 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:41:36 crc kubenswrapper[4751]: I0130 22:41:36.975920 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:41:36 crc kubenswrapper[4751]: E0130 22:41:36.976710 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:41:47 crc kubenswrapper[4751]: I0130 22:41:47.976816 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:41:47 crc kubenswrapper[4751]: E0130 22:41:47.977597 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:42:02 crc kubenswrapper[4751]: I0130 22:42:02.976632 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:42:03 crc kubenswrapper[4751]: I0130 22:42:03.836510 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"f46db10d2da49faae0086076a1a33f5d3a22e7c6010009d90d2a34188dcd0e33"} Jan 30 22:43:52 crc kubenswrapper[4751]: I0130 22:43:52.850783 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cx2g4"] Jan 30 22:43:52 crc kubenswrapper[4751]: E0130 22:43:52.851728 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" containerName="registry-server" Jan 30 22:43:52 crc kubenswrapper[4751]: I0130 22:43:52.851741 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" containerName="registry-server" Jan 30 22:43:52 crc kubenswrapper[4751]: E0130 22:43:52.851773 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" containerName="extract-content" Jan 30 22:43:52 crc kubenswrapper[4751]: I0130 22:43:52.851781 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" containerName="extract-content" Jan 30 22:43:52 crc kubenswrapper[4751]: E0130 22:43:52.851798 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" containerName="extract-utilities" Jan 30 22:43:52 crc kubenswrapper[4751]: I0130 22:43:52.851806 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" containerName="extract-utilities" Jan 30 22:43:52 crc kubenswrapper[4751]: I0130 22:43:52.852007 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" containerName="registry-server" Jan 30 22:43:52 crc kubenswrapper[4751]: I0130 22:43:52.853700 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:43:52 crc kubenswrapper[4751]: I0130 22:43:52.881426 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cx2g4"] Jan 30 22:43:52 crc kubenswrapper[4751]: I0130 22:43:52.938112 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbtmz\" (UniqueName: \"kubernetes.io/projected/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-kube-api-access-jbtmz\") pod \"redhat-operators-cx2g4\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:43:52 crc kubenswrapper[4751]: I0130 22:43:52.938168 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-catalog-content\") pod \"redhat-operators-cx2g4\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:43:52 crc kubenswrapper[4751]: I0130 22:43:52.938490 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-utilities\") pod \"redhat-operators-cx2g4\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:43:53 crc kubenswrapper[4751]: I0130 22:43:53.041039 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-utilities\") pod \"redhat-operators-cx2g4\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:43:53 crc kubenswrapper[4751]: I0130 22:43:53.041192 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbtmz\" (UniqueName: \"kubernetes.io/projected/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-kube-api-access-jbtmz\") pod \"redhat-operators-cx2g4\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:43:53 crc kubenswrapper[4751]: I0130 22:43:53.041221 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-catalog-content\") pod \"redhat-operators-cx2g4\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:43:53 crc kubenswrapper[4751]: I0130 22:43:53.041828 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-catalog-content\") pod \"redhat-operators-cx2g4\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:43:53 crc kubenswrapper[4751]: I0130 22:43:53.042262 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-utilities\") pod \"redhat-operators-cx2g4\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:43:53 crc kubenswrapper[4751]: I0130 22:43:53.063461 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jbtmz\" (UniqueName: \"kubernetes.io/projected/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-kube-api-access-jbtmz\") pod \"redhat-operators-cx2g4\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:43:53 crc kubenswrapper[4751]: I0130 22:43:53.190746 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:43:53 crc kubenswrapper[4751]: I0130 22:43:53.733349 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cx2g4"] Jan 30 22:43:54 crc kubenswrapper[4751]: I0130 22:43:54.055787 4751 generic.go:334] "Generic (PLEG): container finished" podID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerID="373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986" exitCode=0 Jan 30 22:43:54 crc kubenswrapper[4751]: I0130 22:43:54.056109 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cx2g4" event={"ID":"d95f56e2-6bf3-45be-8cb9-bdfe109d5305","Type":"ContainerDied","Data":"373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986"} Jan 30 22:43:54 crc kubenswrapper[4751]: I0130 22:43:54.056143 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cx2g4" event={"ID":"d95f56e2-6bf3-45be-8cb9-bdfe109d5305","Type":"ContainerStarted","Data":"45b2720c7ea825893558acc60658f948a51ad8ce272f3d92c31ad58ff23c7742"} Jan 30 22:43:56 crc kubenswrapper[4751]: I0130 22:43:56.077184 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cx2g4" event={"ID":"d95f56e2-6bf3-45be-8cb9-bdfe109d5305","Type":"ContainerStarted","Data":"78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6"} Jan 30 22:44:01 crc kubenswrapper[4751]: I0130 22:44:01.132448 4751 generic.go:334] "Generic (PLEG): container finished" podID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerID="78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6" exitCode=0 Jan 30 22:44:01 crc kubenswrapper[4751]: I0130 22:44:01.132550 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cx2g4" event={"ID":"d95f56e2-6bf3-45be-8cb9-bdfe109d5305","Type":"ContainerDied","Data":"78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6"} Jan 30 22:44:02 crc kubenswrapper[4751]: I0130 22:44:02.144646 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cx2g4" event={"ID":"d95f56e2-6bf3-45be-8cb9-bdfe109d5305","Type":"ContainerStarted","Data":"38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a"} Jan 30 22:44:02 crc kubenswrapper[4751]: I0130 22:44:02.162321 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cx2g4" podStartSLOduration=2.678330289 podStartE2EDuration="10.162301843s" podCreationTimestamp="2026-01-30 22:43:52 +0000 UTC" firstStartedPulling="2026-01-30 22:43:54.058621694 +0000 UTC m=+5372.804444343" lastFinishedPulling="2026-01-30 22:44:01.542593248 +0000 UTC m=+5380.288415897" observedRunningTime="2026-01-30 22:44:02.159580478 +0000 UTC m=+5380.905403147" watchObservedRunningTime="2026-01-30 22:44:02.162301843 +0000 UTC m=+5380.908124512" Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.190896 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cx2g4" 
Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.191320 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.396341 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k24ns"] Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.411980 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k24ns"] Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.413051 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.515255 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-catalog-content\") pod \"certified-operators-k24ns\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.515404 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-utilities\") pod \"certified-operators-k24ns\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.515501 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqz2k\" (UniqueName: \"kubernetes.io/projected/d5dd7bf4-a432-4493-ba75-3332bd1796e2-kube-api-access-pqz2k\") pod \"certified-operators-k24ns\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.617315 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-catalog-content\") pod \"certified-operators-k24ns\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.617446 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-utilities\") pod \"certified-operators-k24ns\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.617558 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqz2k\" (UniqueName: \"kubernetes.io/projected/d5dd7bf4-a432-4493-ba75-3332bd1796e2-kube-api-access-pqz2k\") pod \"certified-operators-k24ns\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.617844 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-catalog-content\") pod \"certified-operators-k24ns\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " pod="openshift-marketplace/certified-operators-k24ns" Jan 30 
22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.618517 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-utilities\") pod \"certified-operators-k24ns\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.645402 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqz2k\" (UniqueName: \"kubernetes.io/projected/d5dd7bf4-a432-4493-ba75-3332bd1796e2-kube-api-access-pqz2k\") pod \"certified-operators-k24ns\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.746444 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:04 crc kubenswrapper[4751]: I0130 22:44:04.250365 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cx2g4" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="registry-server" probeResult="failure" output=< Jan 30 22:44:04 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:44:04 crc kubenswrapper[4751]: > Jan 30 22:44:04 crc kubenswrapper[4751]: I0130 22:44:04.615809 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k24ns"] Jan 30 22:44:05 crc kubenswrapper[4751]: I0130 22:44:05.176540 4751 generic.go:334] "Generic (PLEG): container finished" podID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerID="494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57" exitCode=0 Jan 30 22:44:05 crc kubenswrapper[4751]: I0130 22:44:05.176655 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k24ns" event={"ID":"d5dd7bf4-a432-4493-ba75-3332bd1796e2","Type":"ContainerDied","Data":"494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57"} Jan 30 22:44:05 crc kubenswrapper[4751]: I0130 22:44:05.176816 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k24ns" event={"ID":"d5dd7bf4-a432-4493-ba75-3332bd1796e2","Type":"ContainerStarted","Data":"929849532c0253cc3db083fc7dd3f3e972b4f182a69073c5905ab6c91d23ae1d"} Jan 30 22:44:06 crc kubenswrapper[4751]: I0130 22:44:06.191509 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k24ns" event={"ID":"d5dd7bf4-a432-4493-ba75-3332bd1796e2","Type":"ContainerStarted","Data":"e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3"} Jan 30 22:44:08 crc kubenswrapper[4751]: I0130 22:44:08.212553 4751 generic.go:334] "Generic (PLEG): container finished" podID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerID="e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3" exitCode=0 Jan 30 22:44:08 crc kubenswrapper[4751]: I0130 22:44:08.212618 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k24ns" event={"ID":"d5dd7bf4-a432-4493-ba75-3332bd1796e2","Type":"ContainerDied","Data":"e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3"} Jan 30 22:44:10 crc kubenswrapper[4751]: I0130 22:44:10.239443 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k24ns" 
event={"ID":"d5dd7bf4-a432-4493-ba75-3332bd1796e2","Type":"ContainerStarted","Data":"ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e"} Jan 30 22:44:10 crc kubenswrapper[4751]: I0130 22:44:10.268873 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k24ns" podStartSLOduration=3.8257431669999997 podStartE2EDuration="7.26885052s" podCreationTimestamp="2026-01-30 22:44:03 +0000 UTC" firstStartedPulling="2026-01-30 22:44:05.178189205 +0000 UTC m=+5383.924011854" lastFinishedPulling="2026-01-30 22:44:08.621296558 +0000 UTC m=+5387.367119207" observedRunningTime="2026-01-30 22:44:10.260526842 +0000 UTC m=+5389.006349511" watchObservedRunningTime="2026-01-30 22:44:10.26885052 +0000 UTC m=+5389.014673169" Jan 30 22:44:13 crc kubenswrapper[4751]: I0130 22:44:13.747236 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:13 crc kubenswrapper[4751]: I0130 22:44:13.747895 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:13 crc kubenswrapper[4751]: I0130 22:44:13.808442 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:14 crc kubenswrapper[4751]: I0130 22:44:14.244217 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cx2g4" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="registry-server" probeResult="failure" output=< Jan 30 22:44:14 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:44:14 crc kubenswrapper[4751]: > Jan 30 22:44:14 crc kubenswrapper[4751]: I0130 22:44:14.333944 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:14 crc kubenswrapper[4751]: I0130 22:44:14.395832 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k24ns"] Jan 30 22:44:16 crc kubenswrapper[4751]: I0130 22:44:16.303778 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k24ns" podUID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerName="registry-server" containerID="cri-o://ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e" gracePeriod=2 Jan 30 22:44:16 crc kubenswrapper[4751]: I0130 22:44:16.906106 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.084599 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-utilities\") pod \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.084754 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqz2k\" (UniqueName: \"kubernetes.io/projected/d5dd7bf4-a432-4493-ba75-3332bd1796e2-kube-api-access-pqz2k\") pod \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.084827 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-catalog-content\") pod \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.085406 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-utilities" (OuterVolumeSpecName: "utilities") pod "d5dd7bf4-a432-4493-ba75-3332bd1796e2" (UID: "d5dd7bf4-a432-4493-ba75-3332bd1796e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.085873 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.093451 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5dd7bf4-a432-4493-ba75-3332bd1796e2-kube-api-access-pqz2k" (OuterVolumeSpecName: "kube-api-access-pqz2k") pod "d5dd7bf4-a432-4493-ba75-3332bd1796e2" (UID: "d5dd7bf4-a432-4493-ba75-3332bd1796e2"). InnerVolumeSpecName "kube-api-access-pqz2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.143582 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5dd7bf4-a432-4493-ba75-3332bd1796e2" (UID: "d5dd7bf4-a432-4493-ba75-3332bd1796e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.188486 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqz2k\" (UniqueName: \"kubernetes.io/projected/d5dd7bf4-a432-4493-ba75-3332bd1796e2-kube-api-access-pqz2k\") on node \"crc\" DevicePath \"\"" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.188521 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.318348 4751 generic.go:334] "Generic (PLEG): container finished" podID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerID="ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e" exitCode=0 Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.318403 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k24ns" event={"ID":"d5dd7bf4-a432-4493-ba75-3332bd1796e2","Type":"ContainerDied","Data":"ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e"} Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.318437 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k24ns" event={"ID":"d5dd7bf4-a432-4493-ba75-3332bd1796e2","Type":"ContainerDied","Data":"929849532c0253cc3db083fc7dd3f3e972b4f182a69073c5905ab6c91d23ae1d"} Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.318441 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.318488 4751 scope.go:117] "RemoveContainer" containerID="ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.349019 4751 scope.go:117] "RemoveContainer" containerID="e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.361996 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k24ns"] Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.380317 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k24ns"] Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.386236 4751 scope.go:117] "RemoveContainer" containerID="494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.450213 4751 scope.go:117] "RemoveContainer" containerID="ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e" Jan 30 22:44:17 crc kubenswrapper[4751]: E0130 22:44:17.451805 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e\": container with ID starting with ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e not found: ID does not exist" containerID="ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.451843 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e"} err="failed to get container status 
\"ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e\": rpc error: code = NotFound desc = could not find container \"ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e\": container with ID starting with ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e not found: ID does not exist" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.451873 4751 scope.go:117] "RemoveContainer" containerID="e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3" Jan 30 22:44:17 crc kubenswrapper[4751]: E0130 22:44:17.452400 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3\": container with ID starting with e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3 not found: ID does not exist" containerID="e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.452436 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3"} err="failed to get container status \"e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3\": rpc error: code = NotFound desc = could not find container \"e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3\": container with ID starting with e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3 not found: ID does not exist" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.452459 4751 scope.go:117] "RemoveContainer" containerID="494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57" Jan 30 22:44:17 crc kubenswrapper[4751]: E0130 22:44:17.452765 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57\": container with ID starting with 494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57 not found: ID does not exist" containerID="494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.452806 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57"} err="failed to get container status \"494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57\": rpc error: code = NotFound desc = could not find container \"494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57\": container with ID starting with 494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57 not found: ID does not exist" Jan 30 22:44:18 crc kubenswrapper[4751]: I0130 22:44:18.002844 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" path="/var/lib/kubelet/pods/d5dd7bf4-a432-4493-ba75-3332bd1796e2/volumes" Jan 30 22:44:24 crc kubenswrapper[4751]: I0130 22:44:24.126421 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:44:24 crc kubenswrapper[4751]: I0130 22:44:24.127022 4751 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:44:24 crc kubenswrapper[4751]: I0130 22:44:24.241254 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cx2g4" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="registry-server" probeResult="failure" output=< Jan 30 22:44:24 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:44:24 crc kubenswrapper[4751]: > Jan 30 22:44:33 crc kubenswrapper[4751]: I0130 22:44:33.243119 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:44:33 crc kubenswrapper[4751]: I0130 22:44:33.302022 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:44:34 crc kubenswrapper[4751]: I0130 22:44:34.227990 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cx2g4"] Jan 30 22:44:34 crc kubenswrapper[4751]: I0130 22:44:34.504938 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cx2g4" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="registry-server" containerID="cri-o://38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a" gracePeriod=2 Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.177744 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.321955 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-utilities\") pod \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.322148 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-catalog-content\") pod \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.322173 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbtmz\" (UniqueName: \"kubernetes.io/projected/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-kube-api-access-jbtmz\") pod \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.323208 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-utilities" (OuterVolumeSpecName: "utilities") pod "d95f56e2-6bf3-45be-8cb9-bdfe109d5305" (UID: "d95f56e2-6bf3-45be-8cb9-bdfe109d5305"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.327650 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-kube-api-access-jbtmz" (OuterVolumeSpecName: "kube-api-access-jbtmz") pod "d95f56e2-6bf3-45be-8cb9-bdfe109d5305" (UID: "d95f56e2-6bf3-45be-8cb9-bdfe109d5305"). InnerVolumeSpecName "kube-api-access-jbtmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.426115 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.426171 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbtmz\" (UniqueName: \"kubernetes.io/projected/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-kube-api-access-jbtmz\") on node \"crc\" DevicePath \"\"" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.462408 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d95f56e2-6bf3-45be-8cb9-bdfe109d5305" (UID: "d95f56e2-6bf3-45be-8cb9-bdfe109d5305"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.518486 4751 generic.go:334] "Generic (PLEG): container finished" podID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerID="38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a" exitCode=0 Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.518542 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cx2g4" event={"ID":"d95f56e2-6bf3-45be-8cb9-bdfe109d5305","Type":"ContainerDied","Data":"38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a"} Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.518601 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cx2g4" event={"ID":"d95f56e2-6bf3-45be-8cb9-bdfe109d5305","Type":"ContainerDied","Data":"45b2720c7ea825893558acc60658f948a51ad8ce272f3d92c31ad58ff23c7742"} Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.518601 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.518681 4751 scope.go:117] "RemoveContainer" containerID="38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.529252 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.549755 4751 scope.go:117] "RemoveContainer" containerID="78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.562220 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cx2g4"] Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.573050 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cx2g4"] Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.585001 4751 scope.go:117] "RemoveContainer" containerID="373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.630176 4751 scope.go:117] "RemoveContainer" containerID="38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a" Jan 30 22:44:35 crc kubenswrapper[4751]: E0130 22:44:35.630935 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a\": container with ID starting with 38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a not found: ID does not exist" containerID="38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.630964 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a"} err="failed to get container status \"38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a\": rpc error: code = NotFound desc = could not find container \"38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a\": container with ID starting with 38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a not found: ID does not exist" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.630986 4751 scope.go:117] "RemoveContainer" containerID="78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6" Jan 30 22:44:35 crc kubenswrapper[4751]: E0130 22:44:35.631444 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6\": container with ID starting with 78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6 not found: ID does not exist" containerID="78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.631464 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6"} err="failed to get container status \"78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6\": rpc error: code = NotFound desc = could not find container 
\"78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6\": container with ID starting with 78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6 not found: ID does not exist" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.631476 4751 scope.go:117] "RemoveContainer" containerID="373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986" Jan 30 22:44:35 crc kubenswrapper[4751]: E0130 22:44:35.631716 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986\": container with ID starting with 373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986 not found: ID does not exist" containerID="373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.631746 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986"} err="failed to get container status \"373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986\": rpc error: code = NotFound desc = could not find container \"373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986\": container with ID starting with 373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986 not found: ID does not exist" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.994740 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" path="/var/lib/kubelet/pods/d95f56e2-6bf3-45be-8cb9-bdfe109d5305/volumes" Jan 30 22:44:54 crc kubenswrapper[4751]: I0130 22:44:54.127140 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:44:54 crc kubenswrapper[4751]: I0130 22:44:54.127630 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.226645 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln"] Jan 30 22:45:00 crc kubenswrapper[4751]: E0130 22:45:00.227747 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="extract-utilities" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.227769 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="extract-utilities" Jan 30 22:45:00 crc kubenswrapper[4751]: E0130 22:45:00.227791 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerName="extract-content" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.227802 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerName="extract-content" Jan 30 22:45:00 crc kubenswrapper[4751]: E0130 22:45:00.227823 4751 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerName="registry-server" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.227831 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerName="registry-server" Jan 30 22:45:00 crc kubenswrapper[4751]: E0130 22:45:00.227880 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="registry-server" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.227888 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="registry-server" Jan 30 22:45:00 crc kubenswrapper[4751]: E0130 22:45:00.227906 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerName="extract-utilities" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.227914 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerName="extract-utilities" Jan 30 22:45:00 crc kubenswrapper[4751]: E0130 22:45:00.227932 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="extract-content" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.227940 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="extract-content" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.228203 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerName="registry-server" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.228228 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="registry-server" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.229182 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.241660 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln"] Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.253112 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.261133 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.421998 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d82ca308-99f6-4e91-969e-fa3eb429b8fc-secret-volume\") pod \"collect-profiles-29496885-n9nln\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.422403 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d82ca308-99f6-4e91-969e-fa3eb429b8fc-config-volume\") pod \"collect-profiles-29496885-n9nln\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.422639 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbklb\" (UniqueName: \"kubernetes.io/projected/d82ca308-99f6-4e91-969e-fa3eb429b8fc-kube-api-access-mbklb\") pod \"collect-profiles-29496885-n9nln\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.525463 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbklb\" (UniqueName: \"kubernetes.io/projected/d82ca308-99f6-4e91-969e-fa3eb429b8fc-kube-api-access-mbklb\") pod \"collect-profiles-29496885-n9nln\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.525683 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d82ca308-99f6-4e91-969e-fa3eb429b8fc-secret-volume\") pod \"collect-profiles-29496885-n9nln\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.525878 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d82ca308-99f6-4e91-969e-fa3eb429b8fc-config-volume\") pod \"collect-profiles-29496885-n9nln\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.526765 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d82ca308-99f6-4e91-969e-fa3eb429b8fc-config-volume\") pod 
\"collect-profiles-29496885-n9nln\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.533171 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d82ca308-99f6-4e91-969e-fa3eb429b8fc-secret-volume\") pod \"collect-profiles-29496885-n9nln\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.544132 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbklb\" (UniqueName: \"kubernetes.io/projected/d82ca308-99f6-4e91-969e-fa3eb429b8fc-kube-api-access-mbklb\") pod \"collect-profiles-29496885-n9nln\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.568857 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:01 crc kubenswrapper[4751]: I0130 22:45:01.042170 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln"] Jan 30 22:45:01 crc kubenswrapper[4751]: I0130 22:45:01.881760 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" event={"ID":"d82ca308-99f6-4e91-969e-fa3eb429b8fc","Type":"ContainerStarted","Data":"a296ff244120e02ca48530e5d9a31814d1ffbf86deffc46137711742d0fb0eb3"} Jan 30 22:45:01 crc kubenswrapper[4751]: I0130 22:45:01.882176 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" event={"ID":"d82ca308-99f6-4e91-969e-fa3eb429b8fc","Type":"ContainerStarted","Data":"3aa379813f0dbc3dd5fc918bd713dab3b7c37eab60aede7f4f9c1cdaf906812c"} Jan 30 22:45:02 crc kubenswrapper[4751]: I0130 22:45:02.894837 4751 generic.go:334] "Generic (PLEG): container finished" podID="d82ca308-99f6-4e91-969e-fa3eb429b8fc" containerID="a296ff244120e02ca48530e5d9a31814d1ffbf86deffc46137711742d0fb0eb3" exitCode=0 Jan 30 22:45:02 crc kubenswrapper[4751]: I0130 22:45:02.895053 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" event={"ID":"d82ca308-99f6-4e91-969e-fa3eb429b8fc","Type":"ContainerDied","Data":"a296ff244120e02ca48530e5d9a31814d1ffbf86deffc46137711742d0fb0eb3"} Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.320233 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.415981 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbklb\" (UniqueName: \"kubernetes.io/projected/d82ca308-99f6-4e91-969e-fa3eb429b8fc-kube-api-access-mbklb\") pod \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.416051 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d82ca308-99f6-4e91-969e-fa3eb429b8fc-secret-volume\") pod \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.416112 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d82ca308-99f6-4e91-969e-fa3eb429b8fc-config-volume\") pod \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.416754 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d82ca308-99f6-4e91-969e-fa3eb429b8fc-config-volume" (OuterVolumeSpecName: "config-volume") pod "d82ca308-99f6-4e91-969e-fa3eb429b8fc" (UID: "d82ca308-99f6-4e91-969e-fa3eb429b8fc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.417435 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d82ca308-99f6-4e91-969e-fa3eb429b8fc-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.421541 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d82ca308-99f6-4e91-969e-fa3eb429b8fc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d82ca308-99f6-4e91-969e-fa3eb429b8fc" (UID: "d82ca308-99f6-4e91-969e-fa3eb429b8fc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.422684 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d82ca308-99f6-4e91-969e-fa3eb429b8fc-kube-api-access-mbklb" (OuterVolumeSpecName: "kube-api-access-mbklb") pod "d82ca308-99f6-4e91-969e-fa3eb429b8fc" (UID: "d82ca308-99f6-4e91-969e-fa3eb429b8fc"). InnerVolumeSpecName "kube-api-access-mbklb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.518394 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbklb\" (UniqueName: \"kubernetes.io/projected/d82ca308-99f6-4e91-969e-fa3eb429b8fc-kube-api-access-mbklb\") on node \"crc\" DevicePath \"\"" Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.518426 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d82ca308-99f6-4e91-969e-fa3eb429b8fc-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.918929 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" event={"ID":"d82ca308-99f6-4e91-969e-fa3eb429b8fc","Type":"ContainerDied","Data":"3aa379813f0dbc3dd5fc918bd713dab3b7c37eab60aede7f4f9c1cdaf906812c"} Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.918976 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3aa379813f0dbc3dd5fc918bd713dab3b7c37eab60aede7f4f9c1cdaf906812c" Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.918958 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:05 crc kubenswrapper[4751]: I0130 22:45:05.402949 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6"] Jan 30 22:45:05 crc kubenswrapper[4751]: I0130 22:45:05.415282 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6"] Jan 30 22:45:05 crc kubenswrapper[4751]: I0130 22:45:05.991405 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87fd180d-a717-4e4f-92fc-e8e77f2d303c" path="/var/lib/kubelet/pods/87fd180d-a717-4e4f-92fc-e8e77f2d303c/volumes" Jan 30 22:45:24 crc kubenswrapper[4751]: I0130 22:45:24.126677 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:45:24 crc kubenswrapper[4751]: I0130 22:45:24.127238 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:45:24 crc kubenswrapper[4751]: I0130 22:45:24.127284 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 22:45:24 crc kubenswrapper[4751]: I0130 22:45:24.128283 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f46db10d2da49faae0086076a1a33f5d3a22e7c6010009d90d2a34188dcd0e33"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:45:24 crc kubenswrapper[4751]: I0130 22:45:24.128366 4751 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://f46db10d2da49faae0086076a1a33f5d3a22e7c6010009d90d2a34188dcd0e33" gracePeriod=600 Jan 30 22:45:25 crc kubenswrapper[4751]: I0130 22:45:25.164544 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="f46db10d2da49faae0086076a1a33f5d3a22e7c6010009d90d2a34188dcd0e33" exitCode=0 Jan 30 22:45:25 crc kubenswrapper[4751]: I0130 22:45:25.164643 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"f46db10d2da49faae0086076a1a33f5d3a22e7c6010009d90d2a34188dcd0e33"} Jan 30 22:45:25 crc kubenswrapper[4751]: I0130 22:45:25.165063 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232"} Jan 30 22:45:25 crc kubenswrapper[4751]: I0130 22:45:25.165087 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:45:29 crc kubenswrapper[4751]: I0130 22:45:29.744310 4751 scope.go:117] "RemoveContainer" containerID="8cb214ecc973d14bc0906a66a17ca4c95d3c39c0cada1250d2a736afa76d1aeb" Jan 30 22:45:49 crc kubenswrapper[4751]: I0130 22:45:49.943805 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kzzg4"] Jan 30 22:45:49 crc kubenswrapper[4751]: E0130 22:45:49.945364 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d82ca308-99f6-4e91-969e-fa3eb429b8fc" containerName="collect-profiles" Jan 30 22:45:49 crc kubenswrapper[4751]: I0130 22:45:49.945390 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d82ca308-99f6-4e91-969e-fa3eb429b8fc" containerName="collect-profiles" Jan 30 22:45:49 crc kubenswrapper[4751]: I0130 22:45:49.945853 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d82ca308-99f6-4e91-969e-fa3eb429b8fc" containerName="collect-profiles" Jan 30 22:45:49 crc kubenswrapper[4751]: I0130 22:45:49.948776 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:49 crc kubenswrapper[4751]: I0130 22:45:49.959186 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzzg4"] Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.079801 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-catalog-content\") pod \"redhat-marketplace-kzzg4\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.079928 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-utilities\") pod \"redhat-marketplace-kzzg4\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.080036 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbfbl\" (UniqueName: \"kubernetes.io/projected/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-kube-api-access-hbfbl\") pod \"redhat-marketplace-kzzg4\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.183986 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-catalog-content\") pod \"redhat-marketplace-kzzg4\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.184611 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-catalog-content\") pod \"redhat-marketplace-kzzg4\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.185263 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-utilities\") pod \"redhat-marketplace-kzzg4\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.185315 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-utilities\") pod \"redhat-marketplace-kzzg4\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.185446 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbfbl\" (UniqueName: \"kubernetes.io/projected/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-kube-api-access-hbfbl\") pod \"redhat-marketplace-kzzg4\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.207818 4751 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hbfbl\" (UniqueName: \"kubernetes.io/projected/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-kube-api-access-hbfbl\") pod \"redhat-marketplace-kzzg4\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.283833 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.795741 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzzg4"] Jan 30 22:45:51 crc kubenswrapper[4751]: I0130 22:45:51.449544 4751 generic.go:334] "Generic (PLEG): container finished" podID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerID="a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a" exitCode=0 Jan 30 22:45:51 crc kubenswrapper[4751]: I0130 22:45:51.449645 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzzg4" event={"ID":"70ea4446-1a4d-41dc-a96c-ca1c271f80ff","Type":"ContainerDied","Data":"a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a"} Jan 30 22:45:51 crc kubenswrapper[4751]: I0130 22:45:51.449855 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzzg4" event={"ID":"70ea4446-1a4d-41dc-a96c-ca1c271f80ff","Type":"ContainerStarted","Data":"360768f03c69a0f21e401d2ca0fd8a4048177b362bdd5e4bc3ab30fcf847656e"} Jan 30 22:45:51 crc kubenswrapper[4751]: I0130 22:45:51.453375 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:45:52 crc kubenswrapper[4751]: I0130 22:45:52.474525 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzzg4" event={"ID":"70ea4446-1a4d-41dc-a96c-ca1c271f80ff","Type":"ContainerStarted","Data":"e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3"} Jan 30 22:45:53 crc kubenswrapper[4751]: I0130 22:45:53.489129 4751 generic.go:334] "Generic (PLEG): container finished" podID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerID="e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3" exitCode=0 Jan 30 22:45:53 crc kubenswrapper[4751]: I0130 22:45:53.489229 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzzg4" event={"ID":"70ea4446-1a4d-41dc-a96c-ca1c271f80ff","Type":"ContainerDied","Data":"e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3"} Jan 30 22:45:55 crc kubenswrapper[4751]: I0130 22:45:55.513522 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzzg4" event={"ID":"70ea4446-1a4d-41dc-a96c-ca1c271f80ff","Type":"ContainerStarted","Data":"ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2"} Jan 30 22:45:55 crc kubenswrapper[4751]: I0130 22:45:55.544561 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kzzg4" podStartSLOduration=4.102318896 podStartE2EDuration="6.544366437s" podCreationTimestamp="2026-01-30 22:45:49 +0000 UTC" firstStartedPulling="2026-01-30 22:45:51.452934312 +0000 UTC m=+5490.198756961" lastFinishedPulling="2026-01-30 22:45:53.894981863 +0000 UTC m=+5492.640804502" observedRunningTime="2026-01-30 22:45:55.532922421 +0000 UTC m=+5494.278745090" watchObservedRunningTime="2026-01-30 22:45:55.544366437 +0000 UTC 
m=+5494.290189096" Jan 30 22:46:00 crc kubenswrapper[4751]: I0130 22:46:00.284572 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:46:00 crc kubenswrapper[4751]: I0130 22:46:00.285232 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:46:00 crc kubenswrapper[4751]: I0130 22:46:00.342053 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:46:00 crc kubenswrapper[4751]: I0130 22:46:00.619909 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:46:00 crc kubenswrapper[4751]: I0130 22:46:00.679191 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzzg4"] Jan 30 22:46:02 crc kubenswrapper[4751]: I0130 22:46:02.582177 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kzzg4" podUID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerName="registry-server" containerID="cri-o://ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2" gracePeriod=2 Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.129160 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.217146 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-utilities\") pod \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.217200 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-catalog-content\") pod \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.217264 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbfbl\" (UniqueName: \"kubernetes.io/projected/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-kube-api-access-hbfbl\") pod \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.218007 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-utilities" (OuterVolumeSpecName: "utilities") pod "70ea4446-1a4d-41dc-a96c-ca1c271f80ff" (UID: "70ea4446-1a4d-41dc-a96c-ca1c271f80ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.218547 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.223584 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-kube-api-access-hbfbl" (OuterVolumeSpecName: "kube-api-access-hbfbl") pod "70ea4446-1a4d-41dc-a96c-ca1c271f80ff" (UID: "70ea4446-1a4d-41dc-a96c-ca1c271f80ff"). InnerVolumeSpecName "kube-api-access-hbfbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.250383 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70ea4446-1a4d-41dc-a96c-ca1c271f80ff" (UID: "70ea4446-1a4d-41dc-a96c-ca1c271f80ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.320829 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.320860 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbfbl\" (UniqueName: \"kubernetes.io/projected/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-kube-api-access-hbfbl\") on node \"crc\" DevicePath \"\"" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.597520 4751 generic.go:334] "Generic (PLEG): container finished" podID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerID="ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2" exitCode=0 Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.597561 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzzg4" event={"ID":"70ea4446-1a4d-41dc-a96c-ca1c271f80ff","Type":"ContainerDied","Data":"ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2"} Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.597592 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzzg4" event={"ID":"70ea4446-1a4d-41dc-a96c-ca1c271f80ff","Type":"ContainerDied","Data":"360768f03c69a0f21e401d2ca0fd8a4048177b362bdd5e4bc3ab30fcf847656e"} Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.597608 4751 scope.go:117] "RemoveContainer" containerID="ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.597626 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.636484 4751 scope.go:117] "RemoveContainer" containerID="e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.654419 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzzg4"] Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.676348 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzzg4"] Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.686534 4751 scope.go:117] "RemoveContainer" containerID="a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.722369 4751 scope.go:117] "RemoveContainer" containerID="ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2" Jan 30 22:46:03 crc kubenswrapper[4751]: E0130 22:46:03.722933 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2\": container with ID starting with ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2 not found: ID does not exist" containerID="ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.722997 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2"} err="failed to get container status \"ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2\": rpc error: code = NotFound desc = could not find container \"ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2\": container with ID starting with ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2 not found: ID does not exist" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.723026 4751 scope.go:117] "RemoveContainer" containerID="e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3" Jan 30 22:46:03 crc kubenswrapper[4751]: E0130 22:46:03.723279 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3\": container with ID starting with e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3 not found: ID does not exist" containerID="e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.723311 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3"} err="failed to get container status \"e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3\": rpc error: code = NotFound desc = could not find container \"e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3\": container with ID starting with e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3 not found: ID does not exist" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.723338 4751 scope.go:117] "RemoveContainer" containerID="a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a" Jan 30 22:46:03 crc kubenswrapper[4751]: E0130 22:46:03.723572 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a\": container with ID starting with a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a not found: ID does not exist" containerID="a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.723591 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a"} err="failed to get container status \"a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a\": rpc error: code = NotFound desc = could not find container \"a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a\": container with ID starting with a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a not found: ID does not exist" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.991517 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" path="/var/lib/kubelet/pods/70ea4446-1a4d-41dc-a96c-ca1c271f80ff/volumes" Jan 30 22:47:24 crc kubenswrapper[4751]: I0130 22:47:24.127536 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:47:24 crc kubenswrapper[4751]: I0130 22:47:24.128148 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:47:54 crc kubenswrapper[4751]: I0130 22:47:54.126924 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:47:54 crc kubenswrapper[4751]: I0130 22:47:54.127606 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:48:24 crc kubenswrapper[4751]: I0130 22:48:24.126843 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:48:24 crc kubenswrapper[4751]: I0130 22:48:24.128010 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:48:24 crc kubenswrapper[4751]: I0130 22:48:24.128121 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 22:48:24 crc kubenswrapper[4751]: I0130 22:48:24.129964 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:48:24 crc kubenswrapper[4751]: I0130 22:48:24.130061 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" gracePeriod=600 Jan 30 22:48:24 crc kubenswrapper[4751]: E0130 22:48:24.260291 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:48:25 crc kubenswrapper[4751]: I0130 22:48:25.245597 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" exitCode=0 Jan 30 22:48:25 crc kubenswrapper[4751]: I0130 22:48:25.245704 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232"} Jan 30 22:48:25 crc kubenswrapper[4751]: I0130 22:48:25.246045 4751 scope.go:117] "RemoveContainer" containerID="f46db10d2da49faae0086076a1a33f5d3a22e7c6010009d90d2a34188dcd0e33" Jan 30 22:48:25 crc kubenswrapper[4751]: I0130 22:48:25.247641 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:48:25 crc kubenswrapper[4751]: E0130 22:48:25.248445 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:48:37 crc kubenswrapper[4751]: I0130 22:48:37.975615 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:48:37 crc kubenswrapper[4751]: E0130 22:48:37.976405 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:48:51 crc 
kubenswrapper[4751]: I0130 22:48:51.986792 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:48:51 crc kubenswrapper[4751]: E0130 22:48:51.987462 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:49:02 crc kubenswrapper[4751]: I0130 22:49:02.976289 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:49:02 crc kubenswrapper[4751]: E0130 22:49:02.977299 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:49:15 crc kubenswrapper[4751]: I0130 22:49:15.977323 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:49:15 crc kubenswrapper[4751]: E0130 22:49:15.978455 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:49:29 crc kubenswrapper[4751]: I0130 22:49:29.976590 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:49:29 crc kubenswrapper[4751]: E0130 22:49:29.977239 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:49:43 crc kubenswrapper[4751]: I0130 22:49:43.287894 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:49:43 crc kubenswrapper[4751]: E0130 22:49:43.300933 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:49:57 crc kubenswrapper[4751]: I0130 22:49:57.975854 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:49:57 crc 
kubenswrapper[4751]: E0130 22:49:57.977932 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:50:11 crc kubenswrapper[4751]: I0130 22:50:11.985694 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:50:11 crc kubenswrapper[4751]: E0130 22:50:11.986515 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:50:25 crc kubenswrapper[4751]: I0130 22:50:25.976095 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:50:25 crc kubenswrapper[4751]: E0130 22:50:25.976979 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:50:38 crc kubenswrapper[4751]: I0130 22:50:38.976752 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:50:38 crc kubenswrapper[4751]: E0130 22:50:38.977879 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:50:51 crc kubenswrapper[4751]: I0130 22:50:51.984221 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:50:51 crc kubenswrapper[4751]: E0130 22:50:51.985233 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:51:05 crc kubenswrapper[4751]: I0130 22:51:05.976468 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:51:05 crc kubenswrapper[4751]: E0130 22:51:05.977641 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:51:18 crc kubenswrapper[4751]: I0130 22:51:18.976297 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:51:18 crc kubenswrapper[4751]: E0130 22:51:18.977292 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.655712 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fdrt4"] Jan 30 22:51:20 crc kubenswrapper[4751]: E0130 22:51:20.656690 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerName="registry-server" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.656706 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerName="registry-server" Jan 30 22:51:20 crc kubenswrapper[4751]: E0130 22:51:20.656724 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerName="extract-content" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.656730 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerName="extract-content" Jan 30 22:51:20 crc kubenswrapper[4751]: E0130 22:51:20.656757 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerName="extract-utilities" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.656763 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerName="extract-utilities" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.656997 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerName="registry-server" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.659439 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.672064 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fdrt4"] Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.735670 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-catalog-content\") pod \"community-operators-fdrt4\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.735961 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9svz\" (UniqueName: \"kubernetes.io/projected/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-kube-api-access-w9svz\") pod \"community-operators-fdrt4\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.736098 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-utilities\") pod \"community-operators-fdrt4\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.839623 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-utilities\") pod \"community-operators-fdrt4\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.839962 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-catalog-content\") pod \"community-operators-fdrt4\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.840117 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9svz\" (UniqueName: \"kubernetes.io/projected/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-kube-api-access-w9svz\") pod \"community-operators-fdrt4\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.840836 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-catalog-content\") pod \"community-operators-fdrt4\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.841282 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-utilities\") pod \"community-operators-fdrt4\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.866593 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w9svz\" (UniqueName: \"kubernetes.io/projected/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-kube-api-access-w9svz\") pod \"community-operators-fdrt4\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.984574 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:21 crc kubenswrapper[4751]: I0130 22:51:21.482493 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fdrt4"] Jan 30 22:51:21 crc kubenswrapper[4751]: W0130 22:51:21.488818 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcac30a6_10b3_43ee_8e8e_8d2514b3f237.slice/crio-ff664fa803d6f8df0cb6e9976be9f069d58f53ff54f7ff683dbcf4a7928d6602 WatchSource:0}: Error finding container ff664fa803d6f8df0cb6e9976be9f069d58f53ff54f7ff683dbcf4a7928d6602: Status 404 returned error can't find the container with id ff664fa803d6f8df0cb6e9976be9f069d58f53ff54f7ff683dbcf4a7928d6602 Jan 30 22:51:22 crc kubenswrapper[4751]: I0130 22:51:22.406415 4751 generic.go:334] "Generic (PLEG): container finished" podID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerID="d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171" exitCode=0 Jan 30 22:51:22 crc kubenswrapper[4751]: I0130 22:51:22.406833 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdrt4" event={"ID":"fcac30a6-10b3-43ee-8e8e-8d2514b3f237","Type":"ContainerDied","Data":"d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171"} Jan 30 22:51:22 crc kubenswrapper[4751]: I0130 22:51:22.406891 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdrt4" event={"ID":"fcac30a6-10b3-43ee-8e8e-8d2514b3f237","Type":"ContainerStarted","Data":"ff664fa803d6f8df0cb6e9976be9f069d58f53ff54f7ff683dbcf4a7928d6602"} Jan 30 22:51:22 crc kubenswrapper[4751]: I0130 22:51:22.409239 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:51:24 crc kubenswrapper[4751]: I0130 22:51:24.432251 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdrt4" event={"ID":"fcac30a6-10b3-43ee-8e8e-8d2514b3f237","Type":"ContainerStarted","Data":"d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476"} Jan 30 22:51:25 crc kubenswrapper[4751]: I0130 22:51:25.446720 4751 generic.go:334] "Generic (PLEG): container finished" podID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerID="d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476" exitCode=0 Jan 30 22:51:25 crc kubenswrapper[4751]: I0130 22:51:25.446770 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdrt4" event={"ID":"fcac30a6-10b3-43ee-8e8e-8d2514b3f237","Type":"ContainerDied","Data":"d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476"} Jan 30 22:51:26 crc kubenswrapper[4751]: I0130 22:51:26.460045 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdrt4" event={"ID":"fcac30a6-10b3-43ee-8e8e-8d2514b3f237","Type":"ContainerStarted","Data":"f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1"} Jan 30 22:51:26 crc kubenswrapper[4751]: I0130 
22:51:26.481422 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fdrt4" podStartSLOduration=3.0513860250000002 podStartE2EDuration="6.481401127s" podCreationTimestamp="2026-01-30 22:51:20 +0000 UTC" firstStartedPulling="2026-01-30 22:51:22.408858192 +0000 UTC m=+5821.154680851" lastFinishedPulling="2026-01-30 22:51:25.838873294 +0000 UTC m=+5824.584695953" observedRunningTime="2026-01-30 22:51:26.478913648 +0000 UTC m=+5825.224736297" watchObservedRunningTime="2026-01-30 22:51:26.481401127 +0000 UTC m=+5825.227223776" Jan 30 22:51:29 crc kubenswrapper[4751]: I0130 22:51:29.977000 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:51:29 crc kubenswrapper[4751]: E0130 22:51:29.978013 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:51:30 crc kubenswrapper[4751]: I0130 22:51:30.984874 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:30 crc kubenswrapper[4751]: I0130 22:51:30.985126 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:31 crc kubenswrapper[4751]: I0130 22:51:31.044701 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:31 crc kubenswrapper[4751]: I0130 22:51:31.594253 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:31 crc kubenswrapper[4751]: I0130 22:51:31.663755 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fdrt4"] Jan 30 22:51:32 crc kubenswrapper[4751]: I0130 22:51:32.782671 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="d55cd7e5-6799-4e1a-9f3b-a92937aca796" containerName="galera" probeResult="failure" output="command timed out" Jan 30 22:51:33 crc kubenswrapper[4751]: I0130 22:51:33.541033 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fdrt4" podUID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerName="registry-server" containerID="cri-o://f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1" gracePeriod=2 Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.037537 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.163996 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-utilities\") pod \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.164346 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-catalog-content\") pod \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.164480 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9svz\" (UniqueName: \"kubernetes.io/projected/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-kube-api-access-w9svz\") pod \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.165098 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-utilities" (OuterVolumeSpecName: "utilities") pod "fcac30a6-10b3-43ee-8e8e-8d2514b3f237" (UID: "fcac30a6-10b3-43ee-8e8e-8d2514b3f237"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.166927 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.173431 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-kube-api-access-w9svz" (OuterVolumeSpecName: "kube-api-access-w9svz") pod "fcac30a6-10b3-43ee-8e8e-8d2514b3f237" (UID: "fcac30a6-10b3-43ee-8e8e-8d2514b3f237"). InnerVolumeSpecName "kube-api-access-w9svz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.227932 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fcac30a6-10b3-43ee-8e8e-8d2514b3f237" (UID: "fcac30a6-10b3-43ee-8e8e-8d2514b3f237"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.269970 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.270005 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9svz\" (UniqueName: \"kubernetes.io/projected/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-kube-api-access-w9svz\") on node \"crc\" DevicePath \"\"" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.556027 4751 generic.go:334] "Generic (PLEG): container finished" podID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerID="f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1" exitCode=0 Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.556098 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdrt4" event={"ID":"fcac30a6-10b3-43ee-8e8e-8d2514b3f237","Type":"ContainerDied","Data":"f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1"} Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.556153 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdrt4" event={"ID":"fcac30a6-10b3-43ee-8e8e-8d2514b3f237","Type":"ContainerDied","Data":"ff664fa803d6f8df0cb6e9976be9f069d58f53ff54f7ff683dbcf4a7928d6602"} Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.556173 4751 scope.go:117] "RemoveContainer" containerID="f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.556201 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.587369 4751 scope.go:117] "RemoveContainer" containerID="d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.624477 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fdrt4"] Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.636819 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fdrt4"] Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.797260 4751 scope.go:117] "RemoveContainer" containerID="d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.849041 4751 scope.go:117] "RemoveContainer" containerID="f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1" Jan 30 22:51:34 crc kubenswrapper[4751]: E0130 22:51:34.849585 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1\": container with ID starting with f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1 not found: ID does not exist" containerID="f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.849616 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1"} err="failed to get container status \"f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1\": rpc error: code = NotFound desc = could not find container \"f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1\": container with ID starting with f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1 not found: ID does not exist" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.849637 4751 scope.go:117] "RemoveContainer" containerID="d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476" Jan 30 22:51:34 crc kubenswrapper[4751]: E0130 22:51:34.849901 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476\": container with ID starting with d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476 not found: ID does not exist" containerID="d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.849926 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476"} err="failed to get container status \"d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476\": rpc error: code = NotFound desc = could not find container \"d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476\": container with ID starting with d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476 not found: ID does not exist" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.849940 4751 scope.go:117] "RemoveContainer" containerID="d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171" Jan 30 22:51:34 crc kubenswrapper[4751]: E0130 22:51:34.850151 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171\": container with ID starting with d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171 not found: ID does not exist" containerID="d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.850183 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171"} err="failed to get container status \"d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171\": rpc error: code = NotFound desc = could not find container \"d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171\": container with ID starting with d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171 not found: ID does not exist" Jan 30 22:51:35 crc kubenswrapper[4751]: I0130 22:51:35.992296 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" path="/var/lib/kubelet/pods/fcac30a6-10b3-43ee-8e8e-8d2514b3f237/volumes" Jan 30 22:51:42 crc kubenswrapper[4751]: I0130 22:51:42.976118 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:51:42 crc kubenswrapper[4751]: E0130 22:51:42.977202 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:51:56 crc kubenswrapper[4751]: I0130 22:51:56.976793 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:51:56 crc kubenswrapper[4751]: E0130 22:51:56.977877 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:52:08 crc kubenswrapper[4751]: I0130 22:52:08.977349 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:52:08 crc kubenswrapper[4751]: E0130 22:52:08.978458 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:52:23 crc kubenswrapper[4751]: I0130 22:52:23.976366 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:52:23 crc kubenswrapper[4751]: E0130 22:52:23.976919 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:52:35 crc kubenswrapper[4751]: I0130 22:52:35.976500 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:52:35 crc kubenswrapper[4751]: E0130 22:52:35.977508 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:52:46 crc kubenswrapper[4751]: I0130 22:52:46.976042 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:52:46 crc kubenswrapper[4751]: E0130 22:52:46.976952 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:52:58 crc kubenswrapper[4751]: I0130 22:52:58.976600 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:52:58 crc kubenswrapper[4751]: E0130 22:52:58.977541 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:53:13 crc kubenswrapper[4751]: I0130 22:53:13.976441 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:53:13 crc kubenswrapper[4751]: E0130 22:53:13.977268 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:53:25 crc kubenswrapper[4751]: I0130 22:53:25.977933 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:53:26 crc kubenswrapper[4751]: I0130 22:53:26.353557 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"1f994498b8705c718253f1d686dfa142a31e491cf05bc7e00a9d3f4b2c57ea67"} Jan 30 
22:55:09 crc kubenswrapper[4751]: I0130 22:55:09.597048 4751 generic.go:334] "Generic (PLEG): container finished" podID="053bddc4-b1a1-4951-af33-6230acd3ee0b" containerID="4a57649ebddefdd6cfb7979e8b07856c36ff49932c8103c4cfd06fb309f09454" exitCode=0 Jan 30 22:55:09 crc kubenswrapper[4751]: I0130 22:55:09.597165 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"053bddc4-b1a1-4951-af33-6230acd3ee0b","Type":"ContainerDied","Data":"4a57649ebddefdd6cfb7979e8b07856c36ff49932c8103c4cfd06fb309f09454"} Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.080160 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.142471 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config-secret\") pod \"053bddc4-b1a1-4951-af33-6230acd3ee0b\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.142547 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-temporary\") pod \"053bddc4-b1a1-4951-af33-6230acd3ee0b\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.142661 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ssh-key\") pod \"053bddc4-b1a1-4951-af33-6230acd3ee0b\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.142694 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config\") pod \"053bddc4-b1a1-4951-af33-6230acd3ee0b\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.142718 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ca-certs\") pod \"053bddc4-b1a1-4951-af33-6230acd3ee0b\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.142783 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"053bddc4-b1a1-4951-af33-6230acd3ee0b\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.142889 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnm9k\" (UniqueName: \"kubernetes.io/projected/053bddc4-b1a1-4951-af33-6230acd3ee0b-kube-api-access-cnm9k\") pod \"053bddc4-b1a1-4951-af33-6230acd3ee0b\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.143097 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-config-data\") pod \"053bddc4-b1a1-4951-af33-6230acd3ee0b\" (UID: 
\"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.143191 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-workdir\") pod \"053bddc4-b1a1-4951-af33-6230acd3ee0b\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.145615 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "053bddc4-b1a1-4951-af33-6230acd3ee0b" (UID: "053bddc4-b1a1-4951-af33-6230acd3ee0b"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.147192 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-config-data" (OuterVolumeSpecName: "config-data") pod "053bddc4-b1a1-4951-af33-6230acd3ee0b" (UID: "053bddc4-b1a1-4951-af33-6230acd3ee0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.150616 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "053bddc4-b1a1-4951-af33-6230acd3ee0b" (UID: "053bddc4-b1a1-4951-af33-6230acd3ee0b"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.154360 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "053bddc4-b1a1-4951-af33-6230acd3ee0b" (UID: "053bddc4-b1a1-4951-af33-6230acd3ee0b"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.158636 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/053bddc4-b1a1-4951-af33-6230acd3ee0b-kube-api-access-cnm9k" (OuterVolumeSpecName: "kube-api-access-cnm9k") pod "053bddc4-b1a1-4951-af33-6230acd3ee0b" (UID: "053bddc4-b1a1-4951-af33-6230acd3ee0b"). InnerVolumeSpecName "kube-api-access-cnm9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.186548 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "053bddc4-b1a1-4951-af33-6230acd3ee0b" (UID: "053bddc4-b1a1-4951-af33-6230acd3ee0b"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.187510 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "053bddc4-b1a1-4951-af33-6230acd3ee0b" (UID: "053bddc4-b1a1-4951-af33-6230acd3ee0b"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.195060 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "053bddc4-b1a1-4951-af33-6230acd3ee0b" (UID: "053bddc4-b1a1-4951-af33-6230acd3ee0b"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.215067 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "053bddc4-b1a1-4951-af33-6230acd3ee0b" (UID: "053bddc4-b1a1-4951-af33-6230acd3ee0b"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.257918 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.257972 4751 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.257989 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.258011 4751 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.258032 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.258044 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.258059 4751 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.269070 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.269106 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnm9k\" (UniqueName: \"kubernetes.io/projected/053bddc4-b1a1-4951-af33-6230acd3ee0b-kube-api-access-cnm9k\") on node \"crc\" DevicePath \"\"" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.299442 4751 operation_generator.go:917] UnmountDevice succeeded for volume 
"local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.371693 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.619829 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"053bddc4-b1a1-4951-af33-6230acd3ee0b","Type":"ContainerDied","Data":"01b3d137ed8bb5af449d591205e958b031f5ad78d5d86311bd69b7e07f52d896"} Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.619871 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01b3d137ed8bb5af449d591205e958b031f5ad78d5d86311bd69b7e07f52d896" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.619943 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.508002 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 22:55:14 crc kubenswrapper[4751]: E0130 22:55:14.509514 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="053bddc4-b1a1-4951-af33-6230acd3ee0b" containerName="tempest-tests-tempest-tests-runner" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.509541 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="053bddc4-b1a1-4951-af33-6230acd3ee0b" containerName="tempest-tests-tempest-tests-runner" Jan 30 22:55:14 crc kubenswrapper[4751]: E0130 22:55:14.509584 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerName="extract-utilities" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.509596 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerName="extract-utilities" Jan 30 22:55:14 crc kubenswrapper[4751]: E0130 22:55:14.509622 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerName="extract-content" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.509637 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerName="extract-content" Jan 30 22:55:14 crc kubenswrapper[4751]: E0130 22:55:14.509668 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerName="registry-server" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.509681 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerName="registry-server" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.510067 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerName="registry-server" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.510090 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="053bddc4-b1a1-4951-af33-6230acd3ee0b" containerName="tempest-tests-tempest-tests-runner" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.511581 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.515254 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-dvc9j" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.520521 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.650271 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjcjr\" (UniqueName: \"kubernetes.io/projected/3555a827-6ba2-4057-a142-ea2818a3d76e-kube-api-access-mjcjr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3555a827-6ba2-4057-a142-ea2818a3d76e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.650806 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3555a827-6ba2-4057-a142-ea2818a3d76e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.752810 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3555a827-6ba2-4057-a142-ea2818a3d76e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.752955 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjcjr\" (UniqueName: \"kubernetes.io/projected/3555a827-6ba2-4057-a142-ea2818a3d76e-kube-api-access-mjcjr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3555a827-6ba2-4057-a142-ea2818a3d76e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.755682 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3555a827-6ba2-4057-a142-ea2818a3d76e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.779749 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjcjr\" (UniqueName: \"kubernetes.io/projected/3555a827-6ba2-4057-a142-ea2818a3d76e-kube-api-access-mjcjr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3555a827-6ba2-4057-a142-ea2818a3d76e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.784861 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3555a827-6ba2-4057-a142-ea2818a3d76e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 22:55:14 crc 
kubenswrapper[4751]: I0130 22:55:14.845008 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 22:55:15 crc kubenswrapper[4751]: I0130 22:55:15.334907 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 22:55:15 crc kubenswrapper[4751]: I0130 22:55:15.667488 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"3555a827-6ba2-4057-a142-ea2818a3d76e","Type":"ContainerStarted","Data":"46b68f4efa8666c51d5131098e4b99383c0297af3a88d6037817cab6411c9901"} Jan 30 22:55:17 crc kubenswrapper[4751]: I0130 22:55:17.695891 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"3555a827-6ba2-4057-a142-ea2818a3d76e","Type":"ContainerStarted","Data":"bc343ed706a565bd0701fc90656578995b227aca5ffd5039bc97a9ed067084cd"} Jan 30 22:55:17 crc kubenswrapper[4751]: I0130 22:55:17.723657 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.2380563860000002 podStartE2EDuration="3.723634345s" podCreationTimestamp="2026-01-30 22:55:14 +0000 UTC" firstStartedPulling="2026-01-30 22:55:15.340525048 +0000 UTC m=+6054.086347697" lastFinishedPulling="2026-01-30 22:55:16.826103007 +0000 UTC m=+6055.571925656" observedRunningTime="2026-01-30 22:55:17.708640872 +0000 UTC m=+6056.454463531" watchObservedRunningTime="2026-01-30 22:55:17.723634345 +0000 UTC m=+6056.469457004" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.039267 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2lghv"] Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.046690 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.060600 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2lghv"] Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.205633 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-utilities\") pod \"redhat-operators-2lghv\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.205936 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72lpm\" (UniqueName: \"kubernetes.io/projected/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-kube-api-access-72lpm\") pod \"redhat-operators-2lghv\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.206136 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-catalog-content\") pod \"redhat-operators-2lghv\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.307955 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-utilities\") pod \"redhat-operators-2lghv\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.308012 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72lpm\" (UniqueName: \"kubernetes.io/projected/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-kube-api-access-72lpm\") pod \"redhat-operators-2lghv\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.308107 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-catalog-content\") pod \"redhat-operators-2lghv\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.308855 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-catalog-content\") pod \"redhat-operators-2lghv\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.308900 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-utilities\") pod \"redhat-operators-2lghv\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.330624 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-72lpm\" (UniqueName: \"kubernetes.io/projected/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-kube-api-access-72lpm\") pod \"redhat-operators-2lghv\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.378204 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.881666 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2lghv"] Jan 30 22:55:44 crc kubenswrapper[4751]: I0130 22:55:44.057586 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lghv" event={"ID":"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c","Type":"ContainerStarted","Data":"d450517e757eb66afa3e8e00e500666c0b3dc5e030c10dd122a698c77bf8e56d"} Jan 30 22:55:45 crc kubenswrapper[4751]: I0130 22:55:45.070111 4751 generic.go:334] "Generic (PLEG): container finished" podID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerID="2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f" exitCode=0 Jan 30 22:55:45 crc kubenswrapper[4751]: I0130 22:55:45.070176 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lghv" event={"ID":"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c","Type":"ContainerDied","Data":"2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f"} Jan 30 22:55:46 crc kubenswrapper[4751]: I0130 22:55:46.084529 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lghv" event={"ID":"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c","Type":"ContainerStarted","Data":"0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155"} Jan 30 22:55:51 crc kubenswrapper[4751]: I0130 22:55:51.150469 4751 generic.go:334] "Generic (PLEG): container finished" podID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerID="0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155" exitCode=0 Jan 30 22:55:51 crc kubenswrapper[4751]: I0130 22:55:51.150551 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lghv" event={"ID":"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c","Type":"ContainerDied","Data":"0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155"} Jan 30 22:55:51 crc kubenswrapper[4751]: I0130 22:55:51.939768 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-89qbh/must-gather-xtff4"] Jan 30 22:55:51 crc kubenswrapper[4751]: I0130 22:55:51.942165 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-89qbh/must-gather-xtff4" Jan 30 22:55:51 crc kubenswrapper[4751]: I0130 22:55:51.945036 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-89qbh"/"openshift-service-ca.crt" Jan 30 22:55:51 crc kubenswrapper[4751]: I0130 22:55:51.953384 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-89qbh"/"kube-root-ca.crt" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.048161 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-89qbh/must-gather-xtff4"] Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.058127 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d2qv\" (UniqueName: \"kubernetes.io/projected/bc2d69f7-78aa-4618-a287-008258e34b47-kube-api-access-5d2qv\") pod \"must-gather-xtff4\" (UID: \"bc2d69f7-78aa-4618-a287-008258e34b47\") " pod="openshift-must-gather-89qbh/must-gather-xtff4" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.058293 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bc2d69f7-78aa-4618-a287-008258e34b47-must-gather-output\") pod \"must-gather-xtff4\" (UID: \"bc2d69f7-78aa-4618-a287-008258e34b47\") " pod="openshift-must-gather-89qbh/must-gather-xtff4" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.161279 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bc2d69f7-78aa-4618-a287-008258e34b47-must-gather-output\") pod \"must-gather-xtff4\" (UID: \"bc2d69f7-78aa-4618-a287-008258e34b47\") " pod="openshift-must-gather-89qbh/must-gather-xtff4" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.161647 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bc2d69f7-78aa-4618-a287-008258e34b47-must-gather-output\") pod \"must-gather-xtff4\" (UID: \"bc2d69f7-78aa-4618-a287-008258e34b47\") " pod="openshift-must-gather-89qbh/must-gather-xtff4" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.161917 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d2qv\" (UniqueName: \"kubernetes.io/projected/bc2d69f7-78aa-4618-a287-008258e34b47-kube-api-access-5d2qv\") pod \"must-gather-xtff4\" (UID: \"bc2d69f7-78aa-4618-a287-008258e34b47\") " pod="openshift-must-gather-89qbh/must-gather-xtff4" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.165959 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lghv" event={"ID":"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c","Type":"ContainerStarted","Data":"af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05"} Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.192065 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d2qv\" (UniqueName: \"kubernetes.io/projected/bc2d69f7-78aa-4618-a287-008258e34b47-kube-api-access-5d2qv\") pod \"must-gather-xtff4\" (UID: \"bc2d69f7-78aa-4618-a287-008258e34b47\") " pod="openshift-must-gather-89qbh/must-gather-xtff4" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.202136 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2lghv" 
podStartSLOduration=3.681503996 podStartE2EDuration="10.202114596s" podCreationTimestamp="2026-01-30 22:55:42 +0000 UTC" firstStartedPulling="2026-01-30 22:55:45.073057797 +0000 UTC m=+6083.818880446" lastFinishedPulling="2026-01-30 22:55:51.593668397 +0000 UTC m=+6090.339491046" observedRunningTime="2026-01-30 22:55:52.184539886 +0000 UTC m=+6090.930362545" watchObservedRunningTime="2026-01-30 22:55:52.202114596 +0000 UTC m=+6090.947937245" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.272869 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-89qbh/must-gather-xtff4" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.871510 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z9d6t"] Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.874581 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.885373 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9d6t"] Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.972182 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-89qbh/must-gather-xtff4"] Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.980403 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6wxk\" (UniqueName: \"kubernetes.io/projected/822b4327-52bb-4f05-a391-3afff2cfe815-kube-api-access-w6wxk\") pod \"redhat-marketplace-z9d6t\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.980544 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-utilities\") pod \"redhat-marketplace-z9d6t\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.980595 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-catalog-content\") pod \"redhat-marketplace-z9d6t\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:52 crc kubenswrapper[4751]: W0130 22:55:52.990358 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc2d69f7_78aa_4618_a287_008258e34b47.slice/crio-575b4d3eb8cef83290428e5d8b415ca61275473bf7a1f9dc15448f290fbb2549 WatchSource:0}: Error finding container 575b4d3eb8cef83290428e5d8b415ca61275473bf7a1f9dc15448f290fbb2549: Status 404 returned error can't find the container with id 575b4d3eb8cef83290428e5d8b415ca61275473bf7a1f9dc15448f290fbb2549 Jan 30 22:55:53 crc kubenswrapper[4751]: I0130 22:55:53.083318 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6wxk\" (UniqueName: \"kubernetes.io/projected/822b4327-52bb-4f05-a391-3afff2cfe815-kube-api-access-w6wxk\") pod \"redhat-marketplace-z9d6t\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:53 crc 
kubenswrapper[4751]: I0130 22:55:53.083954 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-utilities\") pod \"redhat-marketplace-z9d6t\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:53 crc kubenswrapper[4751]: I0130 22:55:53.084048 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-catalog-content\") pod \"redhat-marketplace-z9d6t\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:53 crc kubenswrapper[4751]: I0130 22:55:53.084601 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-utilities\") pod \"redhat-marketplace-z9d6t\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:53 crc kubenswrapper[4751]: I0130 22:55:53.084646 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-catalog-content\") pod \"redhat-marketplace-z9d6t\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:53 crc kubenswrapper[4751]: I0130 22:55:53.112877 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6wxk\" (UniqueName: \"kubernetes.io/projected/822b4327-52bb-4f05-a391-3afff2cfe815-kube-api-access-w6wxk\") pod \"redhat-marketplace-z9d6t\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:53 crc kubenswrapper[4751]: I0130 22:55:53.190818 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/must-gather-xtff4" event={"ID":"bc2d69f7-78aa-4618-a287-008258e34b47","Type":"ContainerStarted","Data":"575b4d3eb8cef83290428e5d8b415ca61275473bf7a1f9dc15448f290fbb2549"} Jan 30 22:55:53 crc kubenswrapper[4751]: I0130 22:55:53.202530 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:53 crc kubenswrapper[4751]: I0130 22:55:53.381029 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:53 crc kubenswrapper[4751]: I0130 22:55:53.381533 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:53 crc kubenswrapper[4751]: I0130 22:55:53.718624 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9d6t"] Jan 30 22:55:54 crc kubenswrapper[4751]: I0130 22:55:54.126938 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:55:54 crc kubenswrapper[4751]: I0130 22:55:54.127323 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:55:54 crc kubenswrapper[4751]: I0130 22:55:54.239748 4751 generic.go:334] "Generic (PLEG): container finished" podID="822b4327-52bb-4f05-a391-3afff2cfe815" containerID="f2ee661fb62008f3e20783cee091203d1a7edba8ecec73406742c81805479ab7" exitCode=0 Jan 30 22:55:54 crc kubenswrapper[4751]: I0130 22:55:54.241407 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9d6t" event={"ID":"822b4327-52bb-4f05-a391-3afff2cfe815","Type":"ContainerDied","Data":"f2ee661fb62008f3e20783cee091203d1a7edba8ecec73406742c81805479ab7"} Jan 30 22:55:54 crc kubenswrapper[4751]: I0130 22:55:54.241458 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9d6t" event={"ID":"822b4327-52bb-4f05-a391-3afff2cfe815","Type":"ContainerStarted","Data":"657149d18faffd7be299b21cac1edbfbf6c55b1e66a8d6b785bd39e1fe11d816"} Jan 30 22:55:54 crc kubenswrapper[4751]: I0130 22:55:54.441133 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2lghv" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="registry-server" probeResult="failure" output=< Jan 30 22:55:54 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:55:54 crc kubenswrapper[4751]: > Jan 30 22:55:55 crc kubenswrapper[4751]: I0130 22:55:55.255737 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9d6t" event={"ID":"822b4327-52bb-4f05-a391-3afff2cfe815","Type":"ContainerStarted","Data":"4381b59162725a7444c1506a2669a6cdd0777d370e2c8d19b2767ff2dc05806f"} Jan 30 22:55:57 crc kubenswrapper[4751]: I0130 22:55:57.281644 4751 generic.go:334] "Generic (PLEG): container finished" podID="822b4327-52bb-4f05-a391-3afff2cfe815" containerID="4381b59162725a7444c1506a2669a6cdd0777d370e2c8d19b2767ff2dc05806f" exitCode=0 Jan 30 22:55:57 crc kubenswrapper[4751]: I0130 22:55:57.281841 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9d6t" 
event={"ID":"822b4327-52bb-4f05-a391-3afff2cfe815","Type":"ContainerDied","Data":"4381b59162725a7444c1506a2669a6cdd0777d370e2c8d19b2767ff2dc05806f"} Jan 30 22:55:58 crc kubenswrapper[4751]: I0130 22:55:58.300817 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/must-gather-xtff4" event={"ID":"bc2d69f7-78aa-4618-a287-008258e34b47","Type":"ContainerStarted","Data":"d98a640d38d1a0008a9787079a3ae73e9ed1113f5304435185bdeae2c0722cd9"} Jan 30 22:55:59 crc kubenswrapper[4751]: I0130 22:55:59.312933 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9d6t" event={"ID":"822b4327-52bb-4f05-a391-3afff2cfe815","Type":"ContainerStarted","Data":"7dfdf62a347d67bf4af34a06215c821cdbe98fcbb138c7e84242629f3eca78c2"} Jan 30 22:55:59 crc kubenswrapper[4751]: I0130 22:55:59.315160 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/must-gather-xtff4" event={"ID":"bc2d69f7-78aa-4618-a287-008258e34b47","Type":"ContainerStarted","Data":"f02334ba80bf205d6fb3fa9e2fb257541f03ce6ab3c97fbdf7d1d6f815819596"} Jan 30 22:55:59 crc kubenswrapper[4751]: I0130 22:55:59.344884 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z9d6t" podStartSLOduration=3.334271574 podStartE2EDuration="7.34485499s" podCreationTimestamp="2026-01-30 22:55:52 +0000 UTC" firstStartedPulling="2026-01-30 22:55:54.249523255 +0000 UTC m=+6092.995345904" lastFinishedPulling="2026-01-30 22:55:58.260106671 +0000 UTC m=+6097.005929320" observedRunningTime="2026-01-30 22:55:59.338094995 +0000 UTC m=+6098.083917664" watchObservedRunningTime="2026-01-30 22:55:59.34485499 +0000 UTC m=+6098.090677649" Jan 30 22:55:59 crc kubenswrapper[4751]: I0130 22:55:59.360655 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-89qbh/must-gather-xtff4" podStartSLOduration=3.510704043 podStartE2EDuration="8.360636971s" podCreationTimestamp="2026-01-30 22:55:51 +0000 UTC" firstStartedPulling="2026-01-30 22:55:52.999024657 +0000 UTC m=+6091.744847306" lastFinishedPulling="2026-01-30 22:55:57.848957585 +0000 UTC m=+6096.594780234" observedRunningTime="2026-01-30 22:55:59.354911054 +0000 UTC m=+6098.100733703" watchObservedRunningTime="2026-01-30 22:55:59.360636971 +0000 UTC m=+6098.106459620" Jan 30 22:56:03 crc kubenswrapper[4751]: I0130 22:56:03.204736 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:56:03 crc kubenswrapper[4751]: I0130 22:56:03.205216 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.260866 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-z9d6t" podUID="822b4327-52bb-4f05-a391-3afff2cfe815" containerName="registry-server" probeResult="failure" output=< Jan 30 22:56:04 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:56:04 crc kubenswrapper[4751]: > Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.442639 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2lghv" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="registry-server" probeResult="failure" output=< Jan 30 22:56:04 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 
30 22:56:04 crc kubenswrapper[4751]: > Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.589371 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-89qbh/crc-debug-q2xqd"] Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.591176 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-q2xqd" Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.593930 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-89qbh"/"default-dockercfg-62sr4" Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.686272 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwq9j\" (UniqueName: \"kubernetes.io/projected/01b77c61-26de-47b9-a360-961173e352c9-kube-api-access-rwq9j\") pod \"crc-debug-q2xqd\" (UID: \"01b77c61-26de-47b9-a360-961173e352c9\") " pod="openshift-must-gather-89qbh/crc-debug-q2xqd" Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.687018 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01b77c61-26de-47b9-a360-961173e352c9-host\") pod \"crc-debug-q2xqd\" (UID: \"01b77c61-26de-47b9-a360-961173e352c9\") " pod="openshift-must-gather-89qbh/crc-debug-q2xqd" Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.789167 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01b77c61-26de-47b9-a360-961173e352c9-host\") pod \"crc-debug-q2xqd\" (UID: \"01b77c61-26de-47b9-a360-961173e352c9\") " pod="openshift-must-gather-89qbh/crc-debug-q2xqd" Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.789276 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwq9j\" (UniqueName: \"kubernetes.io/projected/01b77c61-26de-47b9-a360-961173e352c9-kube-api-access-rwq9j\") pod \"crc-debug-q2xqd\" (UID: \"01b77c61-26de-47b9-a360-961173e352c9\") " pod="openshift-must-gather-89qbh/crc-debug-q2xqd" Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.791022 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01b77c61-26de-47b9-a360-961173e352c9-host\") pod \"crc-debug-q2xqd\" (UID: \"01b77c61-26de-47b9-a360-961173e352c9\") " pod="openshift-must-gather-89qbh/crc-debug-q2xqd" Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.810998 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwq9j\" (UniqueName: \"kubernetes.io/projected/01b77c61-26de-47b9-a360-961173e352c9-kube-api-access-rwq9j\") pod \"crc-debug-q2xqd\" (UID: \"01b77c61-26de-47b9-a360-961173e352c9\") " pod="openshift-must-gather-89qbh/crc-debug-q2xqd" Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.915753 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-q2xqd" Jan 30 22:56:04 crc kubenswrapper[4751]: W0130 22:56:04.958465 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01b77c61_26de_47b9_a360_961173e352c9.slice/crio-456800dcd9c3fd3c4100ffbf0b8217bc5765221ee9f1391a2fc60f7b32123671 WatchSource:0}: Error finding container 456800dcd9c3fd3c4100ffbf0b8217bc5765221ee9f1391a2fc60f7b32123671: Status 404 returned error can't find the container with id 456800dcd9c3fd3c4100ffbf0b8217bc5765221ee9f1391a2fc60f7b32123671 Jan 30 22:56:05 crc kubenswrapper[4751]: I0130 22:56:05.381453 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/crc-debug-q2xqd" event={"ID":"01b77c61-26de-47b9-a360-961173e352c9","Type":"ContainerStarted","Data":"456800dcd9c3fd3c4100ffbf0b8217bc5765221ee9f1391a2fc60f7b32123671"} Jan 30 22:56:13 crc kubenswrapper[4751]: I0130 22:56:13.271350 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:56:13 crc kubenswrapper[4751]: I0130 22:56:13.351176 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:56:14 crc kubenswrapper[4751]: I0130 22:56:14.185509 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9d6t"] Jan 30 22:56:14 crc kubenswrapper[4751]: I0130 22:56:14.426945 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2lghv" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="registry-server" probeResult="failure" output=< Jan 30 22:56:14 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:56:14 crc kubenswrapper[4751]: > Jan 30 22:56:14 crc kubenswrapper[4751]: I0130 22:56:14.481863 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z9d6t" podUID="822b4327-52bb-4f05-a391-3afff2cfe815" containerName="registry-server" containerID="cri-o://7dfdf62a347d67bf4af34a06215c821cdbe98fcbb138c7e84242629f3eca78c2" gracePeriod=2 Jan 30 22:56:15 crc kubenswrapper[4751]: I0130 22:56:15.496198 4751 generic.go:334] "Generic (PLEG): container finished" podID="822b4327-52bb-4f05-a391-3afff2cfe815" containerID="7dfdf62a347d67bf4af34a06215c821cdbe98fcbb138c7e84242629f3eca78c2" exitCode=0 Jan 30 22:56:15 crc kubenswrapper[4751]: I0130 22:56:15.496252 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9d6t" event={"ID":"822b4327-52bb-4f05-a391-3afff2cfe815","Type":"ContainerDied","Data":"7dfdf62a347d67bf4af34a06215c821cdbe98fcbb138c7e84242629f3eca78c2"} Jan 30 22:56:17 crc kubenswrapper[4751]: I0130 22:56:17.917844 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:56:17 crc kubenswrapper[4751]: I0130 22:56:17.922029 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-utilities\") pod \"822b4327-52bb-4f05-a391-3afff2cfe815\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " Jan 30 22:56:17 crc kubenswrapper[4751]: I0130 22:56:17.922170 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6wxk\" (UniqueName: \"kubernetes.io/projected/822b4327-52bb-4f05-a391-3afff2cfe815-kube-api-access-w6wxk\") pod \"822b4327-52bb-4f05-a391-3afff2cfe815\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " Jan 30 22:56:17 crc kubenswrapper[4751]: I0130 22:56:17.922389 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-catalog-content\") pod \"822b4327-52bb-4f05-a391-3afff2cfe815\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " Jan 30 22:56:17 crc kubenswrapper[4751]: I0130 22:56:17.924079 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-utilities" (OuterVolumeSpecName: "utilities") pod "822b4327-52bb-4f05-a391-3afff2cfe815" (UID: "822b4327-52bb-4f05-a391-3afff2cfe815"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:56:17 crc kubenswrapper[4751]: I0130 22:56:17.932286 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/822b4327-52bb-4f05-a391-3afff2cfe815-kube-api-access-w6wxk" (OuterVolumeSpecName: "kube-api-access-w6wxk") pod "822b4327-52bb-4f05-a391-3afff2cfe815" (UID: "822b4327-52bb-4f05-a391-3afff2cfe815"). InnerVolumeSpecName "kube-api-access-w6wxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:56:17 crc kubenswrapper[4751]: I0130 22:56:17.951232 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "822b4327-52bb-4f05-a391-3afff2cfe815" (UID: "822b4327-52bb-4f05-a391-3afff2cfe815"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.025210 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.025250 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6wxk\" (UniqueName: \"kubernetes.io/projected/822b4327-52bb-4f05-a391-3afff2cfe815-kube-api-access-w6wxk\") on node \"crc\" DevicePath \"\"" Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.025261 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.531292 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9d6t" event={"ID":"822b4327-52bb-4f05-a391-3afff2cfe815","Type":"ContainerDied","Data":"657149d18faffd7be299b21cac1edbfbf6c55b1e66a8d6b785bd39e1fe11d816"} Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.531632 4751 scope.go:117] "RemoveContainer" containerID="7dfdf62a347d67bf4af34a06215c821cdbe98fcbb138c7e84242629f3eca78c2" Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.531314 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.537301 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/crc-debug-q2xqd" event={"ID":"01b77c61-26de-47b9-a360-961173e352c9","Type":"ContainerStarted","Data":"c96911f8f05d80b4e318ee559b561fbac28c80da80508a7352f791c9f10292c8"} Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.565398 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9d6t"] Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.582489 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9d6t"] Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.594166 4751 scope.go:117] "RemoveContainer" containerID="4381b59162725a7444c1506a2669a6cdd0777d370e2c8d19b2767ff2dc05806f" Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.595874 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-89qbh/crc-debug-q2xqd" podStartSLOduration=2.235963955 podStartE2EDuration="14.595860542s" podCreationTimestamp="2026-01-30 22:56:04 +0000 UTC" firstStartedPulling="2026-01-30 22:56:04.961024258 +0000 UTC m=+6103.706846907" lastFinishedPulling="2026-01-30 22:56:17.320920845 +0000 UTC m=+6116.066743494" observedRunningTime="2026-01-30 22:56:18.573495991 +0000 UTC m=+6117.319318640" watchObservedRunningTime="2026-01-30 22:56:18.595860542 +0000 UTC m=+6117.341683201" Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.622632 4751 scope.go:117] "RemoveContainer" containerID="f2ee661fb62008f3e20783cee091203d1a7edba8ecec73406742c81805479ab7" Jan 30 22:56:19 crc kubenswrapper[4751]: I0130 22:56:19.989055 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="822b4327-52bb-4f05-a391-3afff2cfe815" path="/var/lib/kubelet/pods/822b4327-52bb-4f05-a391-3afff2cfe815/volumes" Jan 30 22:56:24 crc kubenswrapper[4751]: I0130 22:56:24.126479 4751 patch_prober.go:28] 
interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:56:24 crc kubenswrapper[4751]: I0130 22:56:24.127088 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:56:24 crc kubenswrapper[4751]: I0130 22:56:24.437078 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2lghv" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="registry-server" probeResult="failure" output=< Jan 30 22:56:24 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:56:24 crc kubenswrapper[4751]: > Jan 30 22:56:34 crc kubenswrapper[4751]: I0130 22:56:34.439096 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2lghv" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="registry-server" probeResult="failure" output=< Jan 30 22:56:34 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:56:34 crc kubenswrapper[4751]: > Jan 30 22:56:43 crc kubenswrapper[4751]: I0130 22:56:43.454251 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:56:43 crc kubenswrapper[4751]: I0130 22:56:43.523514 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:56:44 crc kubenswrapper[4751]: I0130 22:56:44.208138 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2lghv"] Jan 30 22:56:45 crc kubenswrapper[4751]: I0130 22:56:45.276818 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2lghv" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="registry-server" containerID="cri-o://af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05" gracePeriod=2 Jan 30 22:56:45 crc kubenswrapper[4751]: I0130 22:56:45.894782 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.032524 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-catalog-content\") pod \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.032804 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-utilities\") pod \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.032892 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72lpm\" (UniqueName: \"kubernetes.io/projected/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-kube-api-access-72lpm\") pod \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.033353 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-utilities" (OuterVolumeSpecName: "utilities") pod "3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" (UID: "3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.033945 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.043504 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-kube-api-access-72lpm" (OuterVolumeSpecName: "kube-api-access-72lpm") pod "3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" (UID: "3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c"). InnerVolumeSpecName "kube-api-access-72lpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.136493 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72lpm\" (UniqueName: \"kubernetes.io/projected/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-kube-api-access-72lpm\") on node \"crc\" DevicePath \"\"" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.147806 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" (UID: "3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.239497 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.290827 4751 generic.go:334] "Generic (PLEG): container finished" podID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerID="af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05" exitCode=0 Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.290871 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lghv" event={"ID":"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c","Type":"ContainerDied","Data":"af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05"} Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.290902 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lghv" event={"ID":"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c","Type":"ContainerDied","Data":"d450517e757eb66afa3e8e00e500666c0b3dc5e030c10dd122a698c77bf8e56d"} Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.290925 4751 scope.go:117] "RemoveContainer" containerID="af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.291086 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.334461 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2lghv"] Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.334431 4751 scope.go:117] "RemoveContainer" containerID="0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.348588 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2lghv"] Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.358061 4751 scope.go:117] "RemoveContainer" containerID="2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.415498 4751 scope.go:117] "RemoveContainer" containerID="af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05" Jan 30 22:56:46 crc kubenswrapper[4751]: E0130 22:56:46.416030 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05\": container with ID starting with af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05 not found: ID does not exist" containerID="af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.416092 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05"} err="failed to get container status \"af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05\": rpc error: code = NotFound desc = could not find container \"af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05\": container with ID starting with af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05 not found: ID does not exist" Jan 30 22:56:46 crc 
kubenswrapper[4751]: I0130 22:56:46.416215 4751 scope.go:117] "RemoveContainer" containerID="0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155" Jan 30 22:56:46 crc kubenswrapper[4751]: E0130 22:56:46.416721 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155\": container with ID starting with 0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155 not found: ID does not exist" containerID="0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.416756 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155"} err="failed to get container status \"0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155\": rpc error: code = NotFound desc = could not find container \"0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155\": container with ID starting with 0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155 not found: ID does not exist" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.416778 4751 scope.go:117] "RemoveContainer" containerID="2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f" Jan 30 22:56:46 crc kubenswrapper[4751]: E0130 22:56:46.417194 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f\": container with ID starting with 2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f not found: ID does not exist" containerID="2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.417286 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f"} err="failed to get container status \"2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f\": rpc error: code = NotFound desc = could not find container \"2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f\": container with ID starting with 2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f not found: ID does not exist" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.002478 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" path="/var/lib/kubelet/pods/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c/volumes" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.434722 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8nlg5"] Jan 30 22:56:48 crc kubenswrapper[4751]: E0130 22:56:48.435817 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="822b4327-52bb-4f05-a391-3afff2cfe815" containerName="extract-content" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.435851 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="822b4327-52bb-4f05-a391-3afff2cfe815" containerName="extract-content" Jan 30 22:56:48 crc kubenswrapper[4751]: E0130 22:56:48.435896 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="extract-content" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.435927 4751 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="extract-content" Jan 30 22:56:48 crc kubenswrapper[4751]: E0130 22:56:48.435954 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="registry-server" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.435966 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="registry-server" Jan 30 22:56:48 crc kubenswrapper[4751]: E0130 22:56:48.435994 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="822b4327-52bb-4f05-a391-3afff2cfe815" containerName="registry-server" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.436005 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="822b4327-52bb-4f05-a391-3afff2cfe815" containerName="registry-server" Jan 30 22:56:48 crc kubenswrapper[4751]: E0130 22:56:48.436020 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="extract-utilities" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.436032 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="extract-utilities" Jan 30 22:56:48 crc kubenswrapper[4751]: E0130 22:56:48.436080 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="822b4327-52bb-4f05-a391-3afff2cfe815" containerName="extract-utilities" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.436090 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="822b4327-52bb-4f05-a391-3afff2cfe815" containerName="extract-utilities" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.436529 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="registry-server" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.436577 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="822b4327-52bb-4f05-a391-3afff2cfe815" containerName="registry-server" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.438857 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.456929 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8nlg5"] Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.494475 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-utilities\") pod \"certified-operators-8nlg5\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.494717 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-catalog-content\") pod \"certified-operators-8nlg5\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.494811 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5x5b\" (UniqueName: \"kubernetes.io/projected/c004278d-44c5-46da-9372-3773f2bd0c80-kube-api-access-w5x5b\") pod \"certified-operators-8nlg5\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.597428 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5x5b\" (UniqueName: \"kubernetes.io/projected/c004278d-44c5-46da-9372-3773f2bd0c80-kube-api-access-w5x5b\") pod \"certified-operators-8nlg5\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.597545 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-utilities\") pod \"certified-operators-8nlg5\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.597680 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-catalog-content\") pod \"certified-operators-8nlg5\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.597988 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-utilities\") pod \"certified-operators-8nlg5\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.598063 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-catalog-content\") pod \"certified-operators-8nlg5\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.622677 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w5x5b\" (UniqueName: \"kubernetes.io/projected/c004278d-44c5-46da-9372-3773f2bd0c80-kube-api-access-w5x5b\") pod \"certified-operators-8nlg5\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.775751 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:49 crc kubenswrapper[4751]: I0130 22:56:49.328653 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8nlg5"] Jan 30 22:56:50 crc kubenswrapper[4751]: I0130 22:56:50.343632 4751 generic.go:334] "Generic (PLEG): container finished" podID="c004278d-44c5-46da-9372-3773f2bd0c80" containerID="8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628" exitCode=0 Jan 30 22:56:50 crc kubenswrapper[4751]: I0130 22:56:50.343860 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nlg5" event={"ID":"c004278d-44c5-46da-9372-3773f2bd0c80","Type":"ContainerDied","Data":"8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628"} Jan 30 22:56:50 crc kubenswrapper[4751]: I0130 22:56:50.344010 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nlg5" event={"ID":"c004278d-44c5-46da-9372-3773f2bd0c80","Type":"ContainerStarted","Data":"6041280b56f65fdcdd4871be580592741f2e8226384f8c7ef10d68a5b4288cd0"} Jan 30 22:56:50 crc kubenswrapper[4751]: I0130 22:56:50.348076 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:56:51 crc kubenswrapper[4751]: I0130 22:56:51.356068 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nlg5" event={"ID":"c004278d-44c5-46da-9372-3773f2bd0c80","Type":"ContainerStarted","Data":"8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732"} Jan 30 22:56:53 crc kubenswrapper[4751]: I0130 22:56:53.379902 4751 generic.go:334] "Generic (PLEG): container finished" podID="c004278d-44c5-46da-9372-3773f2bd0c80" containerID="8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732" exitCode=0 Jan 30 22:56:53 crc kubenswrapper[4751]: I0130 22:56:53.379998 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nlg5" event={"ID":"c004278d-44c5-46da-9372-3773f2bd0c80","Type":"ContainerDied","Data":"8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732"} Jan 30 22:56:54 crc kubenswrapper[4751]: I0130 22:56:54.126883 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:56:54 crc kubenswrapper[4751]: I0130 22:56:54.127417 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:56:54 crc kubenswrapper[4751]: I0130 22:56:54.127463 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 22:56:54 crc kubenswrapper[4751]: I0130 22:56:54.128541 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f994498b8705c718253f1d686dfa142a31e491cf05bc7e00a9d3f4b2c57ea67"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:56:54 crc kubenswrapper[4751]: I0130 22:56:54.129016 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://1f994498b8705c718253f1d686dfa142a31e491cf05bc7e00a9d3f4b2c57ea67" gracePeriod=600 Jan 30 22:56:54 crc kubenswrapper[4751]: I0130 22:56:54.390936 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nlg5" event={"ID":"c004278d-44c5-46da-9372-3773f2bd0c80","Type":"ContainerStarted","Data":"463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f"} Jan 30 22:56:54 crc kubenswrapper[4751]: I0130 22:56:54.394449 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="1f994498b8705c718253f1d686dfa142a31e491cf05bc7e00a9d3f4b2c57ea67" exitCode=0 Jan 30 22:56:54 crc kubenswrapper[4751]: I0130 22:56:54.394490 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"1f994498b8705c718253f1d686dfa142a31e491cf05bc7e00a9d3f4b2c57ea67"} Jan 30 22:56:54 crc kubenswrapper[4751]: I0130 22:56:54.394526 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:56:54 crc kubenswrapper[4751]: I0130 22:56:54.500235 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8nlg5" podStartSLOduration=3.000526477 podStartE2EDuration="6.500209329s" podCreationTimestamp="2026-01-30 22:56:48 +0000 UTC" firstStartedPulling="2026-01-30 22:56:50.347220321 +0000 UTC m=+6149.093042980" lastFinishedPulling="2026-01-30 22:56:53.846903183 +0000 UTC m=+6152.592725832" observedRunningTime="2026-01-30 22:56:54.473060576 +0000 UTC m=+6153.218883225" watchObservedRunningTime="2026-01-30 22:56:54.500209329 +0000 UTC m=+6153.246031988" Jan 30 22:56:55 crc kubenswrapper[4751]: I0130 22:56:55.406528 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b"} Jan 30 22:56:58 crc kubenswrapper[4751]: I0130 22:56:58.776337 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:58 crc kubenswrapper[4751]: I0130 22:56:58.776913 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:58 crc kubenswrapper[4751]: I0130 22:56:58.829644 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:59 crc 
kubenswrapper[4751]: I0130 22:56:59.508617 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:59 crc kubenswrapper[4751]: I0130 22:56:59.574105 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8nlg5"] Jan 30 22:57:01 crc kubenswrapper[4751]: I0130 22:57:01.487374 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8nlg5" podUID="c004278d-44c5-46da-9372-3773f2bd0c80" containerName="registry-server" containerID="cri-o://463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f" gracePeriod=2 Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.136695 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.224743 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-utilities\") pod \"c004278d-44c5-46da-9372-3773f2bd0c80\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.224844 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5x5b\" (UniqueName: \"kubernetes.io/projected/c004278d-44c5-46da-9372-3773f2bd0c80-kube-api-access-w5x5b\") pod \"c004278d-44c5-46da-9372-3773f2bd0c80\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.225008 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-catalog-content\") pod \"c004278d-44c5-46da-9372-3773f2bd0c80\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.230243 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-utilities" (OuterVolumeSpecName: "utilities") pod "c004278d-44c5-46da-9372-3773f2bd0c80" (UID: "c004278d-44c5-46da-9372-3773f2bd0c80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.234655 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c004278d-44c5-46da-9372-3773f2bd0c80-kube-api-access-w5x5b" (OuterVolumeSpecName: "kube-api-access-w5x5b") pod "c004278d-44c5-46da-9372-3773f2bd0c80" (UID: "c004278d-44c5-46da-9372-3773f2bd0c80"). InnerVolumeSpecName "kube-api-access-w5x5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.268358 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c004278d-44c5-46da-9372-3773f2bd0c80" (UID: "c004278d-44c5-46da-9372-3773f2bd0c80"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.327698 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5x5b\" (UniqueName: \"kubernetes.io/projected/c004278d-44c5-46da-9372-3773f2bd0c80-kube-api-access-w5x5b\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.327745 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.327759 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.552067 4751 generic.go:334] "Generic (PLEG): container finished" podID="c004278d-44c5-46da-9372-3773f2bd0c80" containerID="463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f" exitCode=0 Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.552125 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nlg5" event={"ID":"c004278d-44c5-46da-9372-3773f2bd0c80","Type":"ContainerDied","Data":"463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f"} Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.552153 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nlg5" event={"ID":"c004278d-44c5-46da-9372-3773f2bd0c80","Type":"ContainerDied","Data":"6041280b56f65fdcdd4871be580592741f2e8226384f8c7ef10d68a5b4288cd0"} Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.552171 4751 scope.go:117] "RemoveContainer" containerID="463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.552373 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.579189 4751 scope.go:117] "RemoveContainer" containerID="8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.599623 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8nlg5"] Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.608414 4751 scope.go:117] "RemoveContainer" containerID="8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.623227 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8nlg5"] Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.667741 4751 scope.go:117] "RemoveContainer" containerID="463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f" Jan 30 22:57:02 crc kubenswrapper[4751]: E0130 22:57:02.668180 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f\": container with ID starting with 463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f not found: ID does not exist" containerID="463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.668219 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f"} err="failed to get container status \"463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f\": rpc error: code = NotFound desc = could not find container \"463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f\": container with ID starting with 463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f not found: ID does not exist" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.668244 4751 scope.go:117] "RemoveContainer" containerID="8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732" Jan 30 22:57:02 crc kubenswrapper[4751]: E0130 22:57:02.668481 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732\": container with ID starting with 8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732 not found: ID does not exist" containerID="8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.668503 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732"} err="failed to get container status \"8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732\": rpc error: code = NotFound desc = could not find container \"8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732\": container with ID starting with 8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732 not found: ID does not exist" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.668518 4751 scope.go:117] "RemoveContainer" containerID="8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628" Jan 30 22:57:02 crc kubenswrapper[4751]: E0130 22:57:02.668747 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628\": container with ID starting with 8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628 not found: ID does not exist" containerID="8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.668780 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628"} err="failed to get container status \"8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628\": rpc error: code = NotFound desc = could not find container \"8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628\": container with ID starting with 8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628 not found: ID does not exist" Jan 30 22:57:03 crc kubenswrapper[4751]: I0130 22:57:03.987043 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c004278d-44c5-46da-9372-3773f2bd0c80" path="/var/lib/kubelet/pods/c004278d-44c5-46da-9372-3773f2bd0c80/volumes" Jan 30 22:57:12 crc kubenswrapper[4751]: I0130 22:57:12.731829 4751 generic.go:334] "Generic (PLEG): container finished" podID="01b77c61-26de-47b9-a360-961173e352c9" containerID="c96911f8f05d80b4e318ee559b561fbac28c80da80508a7352f791c9f10292c8" exitCode=0 Jan 30 22:57:12 crc kubenswrapper[4751]: I0130 22:57:12.731937 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/crc-debug-q2xqd" event={"ID":"01b77c61-26de-47b9-a360-961173e352c9","Type":"ContainerDied","Data":"c96911f8f05d80b4e318ee559b561fbac28c80da80508a7352f791c9f10292c8"} Jan 30 22:57:13 crc kubenswrapper[4751]: I0130 22:57:13.862886 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-q2xqd" Jan 30 22:57:13 crc kubenswrapper[4751]: I0130 22:57:13.907232 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-89qbh/crc-debug-q2xqd"] Jan 30 22:57:13 crc kubenswrapper[4751]: I0130 22:57:13.918428 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-89qbh/crc-debug-q2xqd"] Jan 30 22:57:13 crc kubenswrapper[4751]: I0130 22:57:13.921156 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwq9j\" (UniqueName: \"kubernetes.io/projected/01b77c61-26de-47b9-a360-961173e352c9-kube-api-access-rwq9j\") pod \"01b77c61-26de-47b9-a360-961173e352c9\" (UID: \"01b77c61-26de-47b9-a360-961173e352c9\") " Jan 30 22:57:13 crc kubenswrapper[4751]: I0130 22:57:13.921417 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01b77c61-26de-47b9-a360-961173e352c9-host\") pod \"01b77c61-26de-47b9-a360-961173e352c9\" (UID: \"01b77c61-26de-47b9-a360-961173e352c9\") " Jan 30 22:57:13 crc kubenswrapper[4751]: I0130 22:57:13.921533 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01b77c61-26de-47b9-a360-961173e352c9-host" (OuterVolumeSpecName: "host") pod "01b77c61-26de-47b9-a360-961173e352c9" (UID: "01b77c61-26de-47b9-a360-961173e352c9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:57:13 crc kubenswrapper[4751]: I0130 22:57:13.922425 4751 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01b77c61-26de-47b9-a360-961173e352c9-host\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:13 crc kubenswrapper[4751]: I0130 22:57:13.928562 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01b77c61-26de-47b9-a360-961173e352c9-kube-api-access-rwq9j" (OuterVolumeSpecName: "kube-api-access-rwq9j") pod "01b77c61-26de-47b9-a360-961173e352c9" (UID: "01b77c61-26de-47b9-a360-961173e352c9"). InnerVolumeSpecName "kube-api-access-rwq9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:57:13 crc kubenswrapper[4751]: I0130 22:57:13.994738 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01b77c61-26de-47b9-a360-961173e352c9" path="/var/lib/kubelet/pods/01b77c61-26de-47b9-a360-961173e352c9/volumes" Jan 30 22:57:14 crc kubenswrapper[4751]: I0130 22:57:14.025204 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwq9j\" (UniqueName: \"kubernetes.io/projected/01b77c61-26de-47b9-a360-961173e352c9-kube-api-access-rwq9j\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:14 crc kubenswrapper[4751]: I0130 22:57:14.754410 4751 scope.go:117] "RemoveContainer" containerID="c96911f8f05d80b4e318ee559b561fbac28c80da80508a7352f791c9f10292c8" Jan 30 22:57:14 crc kubenswrapper[4751]: I0130 22:57:14.754439 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-q2xqd" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.128763 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-89qbh/crc-debug-jpv4n"] Jan 30 22:57:15 crc kubenswrapper[4751]: E0130 22:57:15.129481 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c004278d-44c5-46da-9372-3773f2bd0c80" containerName="extract-utilities" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.129494 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c004278d-44c5-46da-9372-3773f2bd0c80" containerName="extract-utilities" Jan 30 22:57:15 crc kubenswrapper[4751]: E0130 22:57:15.129510 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c004278d-44c5-46da-9372-3773f2bd0c80" containerName="registry-server" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.129516 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c004278d-44c5-46da-9372-3773f2bd0c80" containerName="registry-server" Jan 30 22:57:15 crc kubenswrapper[4751]: E0130 22:57:15.129532 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01b77c61-26de-47b9-a360-961173e352c9" containerName="container-00" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.129538 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="01b77c61-26de-47b9-a360-961173e352c9" containerName="container-00" Jan 30 22:57:15 crc kubenswrapper[4751]: E0130 22:57:15.129553 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c004278d-44c5-46da-9372-3773f2bd0c80" containerName="extract-content" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.129559 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c004278d-44c5-46da-9372-3773f2bd0c80" containerName="extract-content" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.129807 4751 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="01b77c61-26de-47b9-a360-961173e352c9" containerName="container-00" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.129818 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="c004278d-44c5-46da-9372-3773f2bd0c80" containerName="registry-server" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.130657 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-jpv4n" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.133144 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-89qbh"/"default-dockercfg-62sr4" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.257984 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln8lz\" (UniqueName: \"kubernetes.io/projected/1fcbb32f-05b2-4221-898b-83822813a738-kube-api-access-ln8lz\") pod \"crc-debug-jpv4n\" (UID: \"1fcbb32f-05b2-4221-898b-83822813a738\") " pod="openshift-must-gather-89qbh/crc-debug-jpv4n" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.258208 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1fcbb32f-05b2-4221-898b-83822813a738-host\") pod \"crc-debug-jpv4n\" (UID: \"1fcbb32f-05b2-4221-898b-83822813a738\") " pod="openshift-must-gather-89qbh/crc-debug-jpv4n" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.361207 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln8lz\" (UniqueName: \"kubernetes.io/projected/1fcbb32f-05b2-4221-898b-83822813a738-kube-api-access-ln8lz\") pod \"crc-debug-jpv4n\" (UID: \"1fcbb32f-05b2-4221-898b-83822813a738\") " pod="openshift-must-gather-89qbh/crc-debug-jpv4n" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.361310 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1fcbb32f-05b2-4221-898b-83822813a738-host\") pod \"crc-debug-jpv4n\" (UID: \"1fcbb32f-05b2-4221-898b-83822813a738\") " pod="openshift-must-gather-89qbh/crc-debug-jpv4n" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.361435 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1fcbb32f-05b2-4221-898b-83822813a738-host\") pod \"crc-debug-jpv4n\" (UID: \"1fcbb32f-05b2-4221-898b-83822813a738\") " pod="openshift-must-gather-89qbh/crc-debug-jpv4n" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.378557 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln8lz\" (UniqueName: \"kubernetes.io/projected/1fcbb32f-05b2-4221-898b-83822813a738-kube-api-access-ln8lz\") pod \"crc-debug-jpv4n\" (UID: \"1fcbb32f-05b2-4221-898b-83822813a738\") " pod="openshift-must-gather-89qbh/crc-debug-jpv4n" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.454695 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-jpv4n" Jan 30 22:57:15 crc kubenswrapper[4751]: W0130 22:57:15.500916 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fcbb32f_05b2_4221_898b_83822813a738.slice/crio-407d0704b0d1230a5ffd41fc5a1a7acf15dabbdb5d64eeddeea64e42546abfbe WatchSource:0}: Error finding container 407d0704b0d1230a5ffd41fc5a1a7acf15dabbdb5d64eeddeea64e42546abfbe: Status 404 returned error can't find the container with id 407d0704b0d1230a5ffd41fc5a1a7acf15dabbdb5d64eeddeea64e42546abfbe Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.767452 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/crc-debug-jpv4n" event={"ID":"1fcbb32f-05b2-4221-898b-83822813a738","Type":"ContainerStarted","Data":"407d0704b0d1230a5ffd41fc5a1a7acf15dabbdb5d64eeddeea64e42546abfbe"} Jan 30 22:57:16 crc kubenswrapper[4751]: I0130 22:57:16.783087 4751 generic.go:334] "Generic (PLEG): container finished" podID="1fcbb32f-05b2-4221-898b-83822813a738" containerID="77378602a044fe43cd54d596511e6b12c14155716b7a67523c049f4f81292b13" exitCode=0 Jan 30 22:57:16 crc kubenswrapper[4751]: I0130 22:57:16.783147 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/crc-debug-jpv4n" event={"ID":"1fcbb32f-05b2-4221-898b-83822813a738","Type":"ContainerDied","Data":"77378602a044fe43cd54d596511e6b12c14155716b7a67523c049f4f81292b13"} Jan 30 22:57:17 crc kubenswrapper[4751]: I0130 22:57:17.947843 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-jpv4n" Jan 30 22:57:18 crc kubenswrapper[4751]: I0130 22:57:18.023258 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1fcbb32f-05b2-4221-898b-83822813a738-host\") pod \"1fcbb32f-05b2-4221-898b-83822813a738\" (UID: \"1fcbb32f-05b2-4221-898b-83822813a738\") " Jan 30 22:57:18 crc kubenswrapper[4751]: I0130 22:57:18.023707 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln8lz\" (UniqueName: \"kubernetes.io/projected/1fcbb32f-05b2-4221-898b-83822813a738-kube-api-access-ln8lz\") pod \"1fcbb32f-05b2-4221-898b-83822813a738\" (UID: \"1fcbb32f-05b2-4221-898b-83822813a738\") " Jan 30 22:57:18 crc kubenswrapper[4751]: I0130 22:57:18.024149 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1fcbb32f-05b2-4221-898b-83822813a738-host" (OuterVolumeSpecName: "host") pod "1fcbb32f-05b2-4221-898b-83822813a738" (UID: "1fcbb32f-05b2-4221-898b-83822813a738"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:57:18 crc kubenswrapper[4751]: I0130 22:57:18.026684 4751 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1fcbb32f-05b2-4221-898b-83822813a738-host\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:18 crc kubenswrapper[4751]: I0130 22:57:18.030613 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fcbb32f-05b2-4221-898b-83822813a738-kube-api-access-ln8lz" (OuterVolumeSpecName: "kube-api-access-ln8lz") pod "1fcbb32f-05b2-4221-898b-83822813a738" (UID: "1fcbb32f-05b2-4221-898b-83822813a738"). InnerVolumeSpecName "kube-api-access-ln8lz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:57:18 crc kubenswrapper[4751]: I0130 22:57:18.128376 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln8lz\" (UniqueName: \"kubernetes.io/projected/1fcbb32f-05b2-4221-898b-83822813a738-kube-api-access-ln8lz\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:18 crc kubenswrapper[4751]: I0130 22:57:18.815166 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/crc-debug-jpv4n" event={"ID":"1fcbb32f-05b2-4221-898b-83822813a738","Type":"ContainerDied","Data":"407d0704b0d1230a5ffd41fc5a1a7acf15dabbdb5d64eeddeea64e42546abfbe"} Jan 30 22:57:18 crc kubenswrapper[4751]: I0130 22:57:18.815205 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-jpv4n" Jan 30 22:57:18 crc kubenswrapper[4751]: I0130 22:57:18.815215 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="407d0704b0d1230a5ffd41fc5a1a7acf15dabbdb5d64eeddeea64e42546abfbe" Jan 30 22:57:19 crc kubenswrapper[4751]: I0130 22:57:19.276952 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-89qbh/crc-debug-jpv4n"] Jan 30 22:57:19 crc kubenswrapper[4751]: I0130 22:57:19.287286 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-89qbh/crc-debug-jpv4n"] Jan 30 22:57:19 crc kubenswrapper[4751]: I0130 22:57:19.990246 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fcbb32f-05b2-4221-898b-83822813a738" path="/var/lib/kubelet/pods/1fcbb32f-05b2-4221-898b-83822813a738/volumes" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.471356 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-89qbh/crc-debug-9zdxm"] Jan 30 22:57:20 crc kubenswrapper[4751]: E0130 22:57:20.471906 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fcbb32f-05b2-4221-898b-83822813a738" containerName="container-00" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.471921 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fcbb32f-05b2-4221-898b-83822813a738" containerName="container-00" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.472132 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fcbb32f-05b2-4221-898b-83822813a738" containerName="container-00" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.473000 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-9zdxm" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.474920 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-89qbh"/"default-dockercfg-62sr4" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.593864 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzjc2\" (UniqueName: \"kubernetes.io/projected/e60a3a88-2975-458f-a5e9-422a6c519f65-kube-api-access-vzjc2\") pod \"crc-debug-9zdxm\" (UID: \"e60a3a88-2975-458f-a5e9-422a6c519f65\") " pod="openshift-must-gather-89qbh/crc-debug-9zdxm" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.594321 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e60a3a88-2975-458f-a5e9-422a6c519f65-host\") pod \"crc-debug-9zdxm\" (UID: \"e60a3a88-2975-458f-a5e9-422a6c519f65\") " pod="openshift-must-gather-89qbh/crc-debug-9zdxm" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.696144 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzjc2\" (UniqueName: \"kubernetes.io/projected/e60a3a88-2975-458f-a5e9-422a6c519f65-kube-api-access-vzjc2\") pod \"crc-debug-9zdxm\" (UID: \"e60a3a88-2975-458f-a5e9-422a6c519f65\") " pod="openshift-must-gather-89qbh/crc-debug-9zdxm" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.696448 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e60a3a88-2975-458f-a5e9-422a6c519f65-host\") pod \"crc-debug-9zdxm\" (UID: \"e60a3a88-2975-458f-a5e9-422a6c519f65\") " pod="openshift-must-gather-89qbh/crc-debug-9zdxm" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.696707 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e60a3a88-2975-458f-a5e9-422a6c519f65-host\") pod \"crc-debug-9zdxm\" (UID: \"e60a3a88-2975-458f-a5e9-422a6c519f65\") " pod="openshift-must-gather-89qbh/crc-debug-9zdxm" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.715968 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzjc2\" (UniqueName: \"kubernetes.io/projected/e60a3a88-2975-458f-a5e9-422a6c519f65-kube-api-access-vzjc2\") pod \"crc-debug-9zdxm\" (UID: \"e60a3a88-2975-458f-a5e9-422a6c519f65\") " pod="openshift-must-gather-89qbh/crc-debug-9zdxm" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.791521 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-9zdxm" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.840184 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/crc-debug-9zdxm" event={"ID":"e60a3a88-2975-458f-a5e9-422a6c519f65","Type":"ContainerStarted","Data":"737ab55ebd2abf50c9643b0c5e1d3b6670f98d11955015081ef7846fe31b41ce"} Jan 30 22:57:21 crc kubenswrapper[4751]: I0130 22:57:21.889037 4751 generic.go:334] "Generic (PLEG): container finished" podID="e60a3a88-2975-458f-a5e9-422a6c519f65" containerID="be6e18572bbebc1a7aec700bd2eb90d12dc04a78e2daff85c59a029c18a1fcc3" exitCode=0 Jan 30 22:57:21 crc kubenswrapper[4751]: I0130 22:57:21.889291 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/crc-debug-9zdxm" event={"ID":"e60a3a88-2975-458f-a5e9-422a6c519f65","Type":"ContainerDied","Data":"be6e18572bbebc1a7aec700bd2eb90d12dc04a78e2daff85c59a029c18a1fcc3"} Jan 30 22:57:22 crc kubenswrapper[4751]: I0130 22:57:22.035736 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-89qbh/crc-debug-9zdxm"] Jan 30 22:57:22 crc kubenswrapper[4751]: I0130 22:57:22.047384 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-89qbh/crc-debug-9zdxm"] Jan 30 22:57:23 crc kubenswrapper[4751]: I0130 22:57:23.025308 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-9zdxm" Jan 30 22:57:23 crc kubenswrapper[4751]: I0130 22:57:23.169743 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzjc2\" (UniqueName: \"kubernetes.io/projected/e60a3a88-2975-458f-a5e9-422a6c519f65-kube-api-access-vzjc2\") pod \"e60a3a88-2975-458f-a5e9-422a6c519f65\" (UID: \"e60a3a88-2975-458f-a5e9-422a6c519f65\") " Jan 30 22:57:23 crc kubenswrapper[4751]: I0130 22:57:23.169848 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e60a3a88-2975-458f-a5e9-422a6c519f65-host\") pod \"e60a3a88-2975-458f-a5e9-422a6c519f65\" (UID: \"e60a3a88-2975-458f-a5e9-422a6c519f65\") " Jan 30 22:57:23 crc kubenswrapper[4751]: I0130 22:57:23.170239 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e60a3a88-2975-458f-a5e9-422a6c519f65-host" (OuterVolumeSpecName: "host") pod "e60a3a88-2975-458f-a5e9-422a6c519f65" (UID: "e60a3a88-2975-458f-a5e9-422a6c519f65"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:57:23 crc kubenswrapper[4751]: I0130 22:57:23.170685 4751 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e60a3a88-2975-458f-a5e9-422a6c519f65-host\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:23 crc kubenswrapper[4751]: I0130 22:57:23.182005 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e60a3a88-2975-458f-a5e9-422a6c519f65-kube-api-access-vzjc2" (OuterVolumeSpecName: "kube-api-access-vzjc2") pod "e60a3a88-2975-458f-a5e9-422a6c519f65" (UID: "e60a3a88-2975-458f-a5e9-422a6c519f65"). InnerVolumeSpecName "kube-api-access-vzjc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:57:23 crc kubenswrapper[4751]: I0130 22:57:23.272956 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzjc2\" (UniqueName: \"kubernetes.io/projected/e60a3a88-2975-458f-a5e9-422a6c519f65-kube-api-access-vzjc2\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:23 crc kubenswrapper[4751]: I0130 22:57:23.921125 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="737ab55ebd2abf50c9643b0c5e1d3b6670f98d11955015081ef7846fe31b41ce" Jan 30 22:57:23 crc kubenswrapper[4751]: I0130 22:57:23.921177 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-9zdxm" Jan 30 22:57:23 crc kubenswrapper[4751]: I0130 22:57:23.990760 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e60a3a88-2975-458f-a5e9-422a6c519f65" path="/var/lib/kubelet/pods/e60a3a88-2975-458f-a5e9-422a6c519f65/volumes" Jan 30 22:57:48 crc kubenswrapper[4751]: I0130 22:57:48.682078 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_0c9eccf2-9252-4f35-9aff-56f0e15102a1/aodh-api/0.log" Jan 30 22:57:48 crc kubenswrapper[4751]: I0130 22:57:48.854185 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_0c9eccf2-9252-4f35-9aff-56f0e15102a1/aodh-evaluator/0.log" Jan 30 22:57:48 crc kubenswrapper[4751]: I0130 22:57:48.855070 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_0c9eccf2-9252-4f35-9aff-56f0e15102a1/aodh-listener/0.log" Jan 30 22:57:48 crc kubenswrapper[4751]: I0130 22:57:48.918279 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_0c9eccf2-9252-4f35-9aff-56f0e15102a1/aodh-notifier/0.log" Jan 30 22:57:49 crc kubenswrapper[4751]: I0130 22:57:49.173581 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-b7f497ffb-fkntp_1a2838e6-7563-4e97-893d-58d8619b780b/barbican-api-log/0.log" Jan 30 22:57:49 crc kubenswrapper[4751]: I0130 22:57:49.186923 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-b7f497ffb-fkntp_1a2838e6-7563-4e97-893d-58d8619b780b/barbican-api/0.log" Jan 30 22:57:49 crc kubenswrapper[4751]: I0130 22:57:49.358986 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56b859c9db-tvldd_334843b7-3c66-42fa-8880-4337946df593/barbican-keystone-listener/0.log" Jan 30 22:57:49 crc kubenswrapper[4751]: I0130 22:57:49.508176 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56b859c9db-tvldd_334843b7-3c66-42fa-8880-4337946df593/barbican-keystone-listener-log/0.log" Jan 30 22:57:49 crc kubenswrapper[4751]: I0130 22:57:49.559487 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5fd66f57b7-5jqls_76562ec1-fb40-4590-9d96-f05cafc13640/barbican-worker/0.log" Jan 30 22:57:49 crc kubenswrapper[4751]: I0130 22:57:49.601875 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5fd66f57b7-5jqls_76562ec1-fb40-4590-9d96-f05cafc13640/barbican-worker-log/0.log" Jan 30 22:57:49 crc kubenswrapper[4751]: I0130 22:57:49.761791 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl_25d1f8e8-75ed-46ae-b674-87f34c4edbfa/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:49 crc kubenswrapper[4751]: I0130 
22:57:49.902991 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c69dc070-7de6-4681-a44b-6e2007a7f671/ceilometer-central-agent/0.log" Jan 30 22:57:50 crc kubenswrapper[4751]: I0130 22:57:50.008985 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c69dc070-7de6-4681-a44b-6e2007a7f671/proxy-httpd/0.log" Jan 30 22:57:50 crc kubenswrapper[4751]: I0130 22:57:50.059882 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c69dc070-7de6-4681-a44b-6e2007a7f671/ceilometer-notification-agent/0.log" Jan 30 22:57:50 crc kubenswrapper[4751]: I0130 22:57:50.126164 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c69dc070-7de6-4681-a44b-6e2007a7f671/sg-core/0.log" Jan 30 22:57:50 crc kubenswrapper[4751]: I0130 22:57:50.308061 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e741273e-caa0-4a2c-9ed0-6bae195052ce/cinder-api-log/0.log" Jan 30 22:57:50 crc kubenswrapper[4751]: I0130 22:57:50.312820 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e741273e-caa0-4a2c-9ed0-6bae195052ce/cinder-api/0.log" Jan 30 22:57:50 crc kubenswrapper[4751]: I0130 22:57:50.482010 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56/cinder-scheduler/0.log" Jan 30 22:57:50 crc kubenswrapper[4751]: I0130 22:57:50.574293 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56/probe/0.log" Jan 30 22:57:50 crc kubenswrapper[4751]: I0130 22:57:50.646374 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz_a21b5781-ce12-434c-9f38-47bf5f6ad332/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:50 crc kubenswrapper[4751]: I0130 22:57:50.800953 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-czgz2_39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:50 crc kubenswrapper[4751]: I0130 22:57:50.901762 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-7h4pb_1eb1b0d1-2407-440a-826b-b5158aab8be3/init/0.log" Jan 30 22:57:51 crc kubenswrapper[4751]: I0130 22:57:51.113948 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-7h4pb_1eb1b0d1-2407-440a-826b-b5158aab8be3/init/0.log" Jan 30 22:57:51 crc kubenswrapper[4751]: I0130 22:57:51.171586 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-czw8p_b45d4d88-6b91-4bfc-9619-68fdb7d90f05/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:51 crc kubenswrapper[4751]: I0130 22:57:51.195289 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-7h4pb_1eb1b0d1-2407-440a-826b-b5158aab8be3/dnsmasq-dns/0.log" Jan 30 22:57:51 crc kubenswrapper[4751]: I0130 22:57:51.394260 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cef73daf-a49c-4b32-8ebc-fe0adf90df58/glance-httpd/0.log" Jan 30 22:57:51 crc kubenswrapper[4751]: I0130 22:57:51.426773 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_cef73daf-a49c-4b32-8ebc-fe0adf90df58/glance-log/0.log" Jan 30 22:57:51 crc kubenswrapper[4751]: I0130 22:57:51.611406 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4dcf400d-5171-4388-bfbc-18d62a106a12/glance-log/0.log" Jan 30 22:57:51 crc kubenswrapper[4751]: I0130 22:57:51.614062 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4dcf400d-5171-4388-bfbc-18d62a106a12/glance-httpd/0.log" Jan 30 22:57:52 crc kubenswrapper[4751]: I0130 22:57:52.347873 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-68c4b8fdd-wvfwg_ce637680-0e89-4089-bbb7-704117a5dcb0/heat-api/0.log" Jan 30 22:57:52 crc kubenswrapper[4751]: I0130 22:57:52.522603 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-75666c8dc5-6rmsl_3100f81b-465d-42f8-9bbd-88e0aecbdc56/heat-cfnapi/0.log" Jan 30 22:57:52 crc kubenswrapper[4751]: I0130 22:57:52.606743 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7ccc7fc744-trd9b_2465732f-6109-4d66-84c4-f08a6a1ac472/heat-engine/0.log" Jan 30 22:57:52 crc kubenswrapper[4751]: I0130 22:57:52.664799 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d_d9b249ee-25bd-4b25-aaaf-57c3a55dad1f/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:52 crc kubenswrapper[4751]: I0130 22:57:52.743857 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-trgr7_aa80e137-3a03-4857-9ec0-aa2f9b58df0d/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:52 crc kubenswrapper[4751]: I0130 22:57:52.951075 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29496841-qnsrj_ec292c3e-470e-4f61-92e9-4e2c8098f879/keystone-cron/0.log" Jan 30 22:57:53 crc kubenswrapper[4751]: I0130 22:57:53.069685 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_dc0cb07e-2f77-49e2-931f-c896c3962f9d/kube-state-metrics/0.log" Jan 30 22:57:53 crc kubenswrapper[4751]: I0130 22:57:53.260240 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-9j62f_64c0e484-536b-4bf5-9f35-2bfc04b14133/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:53 crc kubenswrapper[4751]: I0130 22:57:53.369389 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-55986d9fc9-zjsx4_aab674da-e1ff-4881-9432-fad6b85111f2/keystone-api/0.log" Jan 30 22:57:53 crc kubenswrapper[4751]: I0130 22:57:53.405127 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-jn6hm_61149618-7cc3-4dd6-b61a-0fb8226f2cc1/logging-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:53 crc kubenswrapper[4751]: I0130 22:57:53.547997 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_e7f85043-bc84-41e2-9f14-a08f96da06f2/mysqld-exporter/0.log" Jan 30 22:57:53 crc kubenswrapper[4751]: I0130 22:57:53.984254 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6989c95c85-6thsl_68910b8d-2ec3-4b7c-956c-e3d3518042cf/neutron-httpd/0.log" Jan 30 22:57:54 crc kubenswrapper[4751]: I0130 22:57:54.026408 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj_9d2edd75-7066-43c1-9636-149a176ee575/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:54 crc kubenswrapper[4751]: I0130 22:57:54.117744 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6989c95c85-6thsl_68910b8d-2ec3-4b7c-956c-e3d3518042cf/neutron-api/0.log" Jan 30 22:57:54 crc kubenswrapper[4751]: I0130 22:57:54.648058 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_9bb304d7-db8e-4943-b0bc-d30a4332df91/nova-cell0-conductor-conductor/0.log" Jan 30 22:57:54 crc kubenswrapper[4751]: I0130 22:57:54.928020 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e3c7d82f-3209-44cf-a463-9affaab3de75/nova-api-log/0.log" Jan 30 22:57:55 crc kubenswrapper[4751]: I0130 22:57:55.001672 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_6c1153d5-e025-439d-9799-8bf38014a585/nova-cell1-conductor-conductor/0.log" Jan 30 22:57:55 crc kubenswrapper[4751]: I0130 22:57:55.269340 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_150d4911-b366-4c81-b4fa-b5c5e8cadc78/nova-cell1-novncproxy-novncproxy/0.log" Jan 30 22:57:55 crc kubenswrapper[4751]: I0130 22:57:55.320363 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-gjscv_7165caae-e471-463b-9f66-be7fb4c7c463/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:55 crc kubenswrapper[4751]: I0130 22:57:55.365560 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e3c7d82f-3209-44cf-a463-9affaab3de75/nova-api-api/0.log" Jan 30 22:57:55 crc kubenswrapper[4751]: I0130 22:57:55.569132 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_179951f5-39be-43d7-a2fa-3c6f04555760/nova-metadata-log/0.log" Jan 30 22:57:55 crc kubenswrapper[4751]: I0130 22:57:55.976566 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32/mysql-bootstrap/0.log" Jan 30 22:57:56 crc kubenswrapper[4751]: I0130 22:57:56.027734 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_977b9205-4c23-4ff0-9193-5938e4b87c64/nova-scheduler-scheduler/0.log" Jan 30 22:57:56 crc kubenswrapper[4751]: I0130 22:57:56.153389 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32/mysql-bootstrap/0.log" Jan 30 22:57:56 crc kubenswrapper[4751]: I0130 22:57:56.319271 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32/galera/0.log" Jan 30 22:57:56 crc kubenswrapper[4751]: I0130 22:57:56.433885 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d55cd7e5-6799-4e1a-9f3b-a92937aca796/mysql-bootstrap/0.log" Jan 30 22:57:57 crc kubenswrapper[4751]: I0130 22:57:57.041269 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d55cd7e5-6799-4e1a-9f3b-a92937aca796/mysql-bootstrap/0.log" Jan 30 22:57:57 crc kubenswrapper[4751]: I0130 22:57:57.059186 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d55cd7e5-6799-4e1a-9f3b-a92937aca796/galera/0.log" Jan 30 22:57:57 crc 
kubenswrapper[4751]: I0130 22:57:57.305275 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_af93872a-62a1-407c-9932-2afb4313f457/openstackclient/0.log" Jan 30 22:57:57 crc kubenswrapper[4751]: I0130 22:57:57.362012 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-g9s48_fbc382fd-1513-4137-b801-5627cc5886ea/ovn-controller/0.log" Jan 30 22:57:57 crc kubenswrapper[4751]: I0130 22:57:57.625938 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-6bddb_e60cf673-3513-4af6-ac72-280908e95405/openstack-network-exporter/0.log" Jan 30 22:57:57 crc kubenswrapper[4751]: I0130 22:57:57.883091 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f4rx8_071bab49-34f0-4fef-849e-c2530b4c423c/ovsdb-server-init/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.072367 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_179951f5-39be-43d7-a2fa-3c6f04555760/nova-metadata-metadata/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.190757 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f4rx8_071bab49-34f0-4fef-849e-c2530b4c423c/ovsdb-server-init/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.193313 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f4rx8_071bab49-34f0-4fef-849e-c2530b4c423c/ovsdb-server/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.197838 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f4rx8_071bab49-34f0-4fef-849e-c2530b4c423c/ovs-vswitchd/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.463479 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f31a7def-755f-49e8-bf97-7e155bcc5113/openstack-network-exporter/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.465936 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-g7v98_43548d7f-01a0-4905-a26d-424ba948cbe8/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.646859 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f31a7def-755f-49e8-bf97-7e155bcc5113/ovn-northd/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.717091 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_47614a4a-f824-4eb4-9f46-bf1ab137d364/openstack-network-exporter/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.886819 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_47614a4a-f824-4eb4-9f46-bf1ab137d364/ovsdbserver-nb/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.886996 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1f8708be-4bf5-440d-a6e3-876acf844253/openstack-network-exporter/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.984151 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1f8708be-4bf5-440d-a6e3-876acf844253/ovsdbserver-sb/0.log" Jan 30 22:57:59 crc kubenswrapper[4751]: I0130 22:57:59.329915 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6bcbb59b46-2xhmj_0cb6a4c8-d098-48b5-8ffe-ff46a64bc377/placement-api/0.log" Jan 30 
22:57:59 crc kubenswrapper[4751]: I0130 22:57:59.447429 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6bcbb59b46-2xhmj_0cb6a4c8-d098-48b5-8ffe-ff46a64bc377/placement-log/0.log" Jan 30 22:57:59 crc kubenswrapper[4751]: I0130 22:57:59.563484 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3e7af95c-7ba2-4e0b-9947-795d9629744c/init-config-reloader/0.log" Jan 30 22:57:59 crc kubenswrapper[4751]: I0130 22:57:59.706973 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3e7af95c-7ba2-4e0b-9947-795d9629744c/init-config-reloader/0.log" Jan 30 22:57:59 crc kubenswrapper[4751]: I0130 22:57:59.763863 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3e7af95c-7ba2-4e0b-9947-795d9629744c/config-reloader/0.log" Jan 30 22:57:59 crc kubenswrapper[4751]: I0130 22:57:59.765724 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3e7af95c-7ba2-4e0b-9947-795d9629744c/prometheus/0.log" Jan 30 22:57:59 crc kubenswrapper[4751]: I0130 22:57:59.812672 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3e7af95c-7ba2-4e0b-9947-795d9629744c/thanos-sidecar/0.log" Jan 30 22:58:00 crc kubenswrapper[4751]: I0130 22:58:00.037158 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_aa019efa-4067-4bd5-b370-12f6a4e6b856/setup-container/0.log" Jan 30 22:58:00 crc kubenswrapper[4751]: I0130 22:58:00.209081 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_aa019efa-4067-4bd5-b370-12f6a4e6b856/setup-container/0.log" Jan 30 22:58:00 crc kubenswrapper[4751]: I0130 22:58:00.310524 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_aa019efa-4067-4bd5-b370-12f6a4e6b856/rabbitmq/0.log" Jan 30 22:58:00 crc kubenswrapper[4751]: I0130 22:58:00.354233 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4ab0c22c-f078-413c-ac94-9e543a02c3fb/setup-container/0.log" Jan 30 22:58:00 crc kubenswrapper[4751]: I0130 22:58:00.623135 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4ab0c22c-f078-413c-ac94-9e543a02c3fb/rabbitmq/0.log" Jan 30 22:58:00 crc kubenswrapper[4751]: I0130 22:58:00.657806 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4ab0c22c-f078-413c-ac94-9e543a02c3fb/setup-container/0.log" Jan 30 22:58:00 crc kubenswrapper[4751]: I0130 22:58:00.711775 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_279dd57b-8f7d-4730-a9ee-cf124f8c0d52/setup-container/0.log" Jan 30 22:58:01 crc kubenswrapper[4751]: I0130 22:58:00.999756 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_279dd57b-8f7d-4730-a9ee-cf124f8c0d52/setup-container/0.log" Jan 30 22:58:01 crc kubenswrapper[4751]: I0130 22:58:01.164397 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_279dd57b-8f7d-4730-a9ee-cf124f8c0d52/rabbitmq/0.log" Jan 30 22:58:01 crc kubenswrapper[4751]: I0130 22:58:01.255576 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_29afad92-51c9-45a8-a6a0-ed64925f91f3/setup-container/0.log" Jan 30 22:58:01 crc kubenswrapper[4751]: I0130 22:58:01.646938 
4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_29afad92-51c9-45a8-a6a0-ed64925f91f3/setup-container/0.log" Jan 30 22:58:01 crc kubenswrapper[4751]: I0130 22:58:01.705121 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_29afad92-51c9-45a8-a6a0-ed64925f91f3/rabbitmq/0.log" Jan 30 22:58:01 crc kubenswrapper[4751]: I0130 22:58:01.841998 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp_0562f716-fdf2-41ff-bb36-5474fa9be5c0/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:58:02 crc kubenswrapper[4751]: I0130 22:58:02.038484 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-2bbbp_a4b9ecbd-4cf2-4554-b209-d7a421499f08/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:58:02 crc kubenswrapper[4751]: I0130 22:58:02.265194 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w_37b91419-687f-4907-888d-9344d1e8602a/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:58:02 crc kubenswrapper[4751]: I0130 22:58:02.309619 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-zbttg_10f27009-b34c-43f0-999f-64c2e2316013/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:58:02 crc kubenswrapper[4751]: I0130 22:58:02.593473 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-gtfdq_1c9c26ff-407a-4595-8406-e3a0d46450aa/ssh-known-hosts-edpm-deployment/0.log" Jan 30 22:58:02 crc kubenswrapper[4751]: I0130 22:58:02.901483 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58dc6df599-nmmxw_b9f02a32-18ed-4030-94d6-16f4d0feff52/proxy-server/0.log" Jan 30 22:58:02 crc kubenswrapper[4751]: I0130 22:58:02.928360 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-vvq25_70af95fb-5ca8-4482-a1bc-81b1891e0da7/swift-ring-rebalance/0.log" Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.080022 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58dc6df599-nmmxw_b9f02a32-18ed-4030-94d6-16f4d0feff52/proxy-httpd/0.log" Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.206096 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/account-reaper/0.log" Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.271578 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/account-auditor/0.log" Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.426766 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/account-server/0.log" Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.448011 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/account-replicator/0.log" Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.538690 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/container-auditor/0.log" Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.609512 4751 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/container-replicator/0.log" Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.704690 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/container-server/0.log" Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.763734 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/container-updater/0.log" Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.831146 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/object-expirer/0.log" Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.868175 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/object-auditor/0.log" Jan 30 22:58:04 crc kubenswrapper[4751]: I0130 22:58:04.018407 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/object-server/0.log" Jan 30 22:58:04 crc kubenswrapper[4751]: I0130 22:58:04.019732 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/object-replicator/0.log" Jan 30 22:58:04 crc kubenswrapper[4751]: I0130 22:58:04.102318 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/object-updater/0.log" Jan 30 22:58:04 crc kubenswrapper[4751]: I0130 22:58:04.154073 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/rsync/0.log" Jan 30 22:58:04 crc kubenswrapper[4751]: I0130 22:58:04.267919 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/swift-recon-cron/0.log" Jan 30 22:58:04 crc kubenswrapper[4751]: I0130 22:58:04.413311 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx_93c2956e-910c-4604-a9ba-86289f854a59/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:58:04 crc kubenswrapper[4751]: I0130 22:58:04.556510 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v_ac636140-8b68-474a-a7f9-7d46e6a22de0/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:58:04 crc kubenswrapper[4751]: I0130 22:58:04.810084 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_3555a827-6ba2-4057-a142-ea2818a3d76e/test-operator-logs-container/0.log" Jan 30 22:58:05 crc kubenswrapper[4751]: I0130 22:58:05.090302 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-plxr5_538f9f69-1642-4944-a5e1-7348a104c5e6/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:58:05 crc kubenswrapper[4751]: I0130 22:58:05.647647 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_053bddc4-b1a1-4951-af33-6230acd3ee0b/tempest-tests-tempest-tests-runner/0.log" Jan 30 22:58:10 crc kubenswrapper[4751]: I0130 22:58:10.718494 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_14c5f0f0-6d85-4d60-9daa-7fa3b401a884/memcached/0.log" Jan 30 22:58:35 crc kubenswrapper[4751]: I0130 22:58:35.634421 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-7mpjw_236db419-e197-4a85-ab49-58cf38babea6/manager/0.log" Jan 30 22:58:35 crc kubenswrapper[4751]: I0130 22:58:35.810911 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m_8fed4afd-9214-4ec9-816d-2ba6213f2f89/util/0.log" Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.019926 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m_8fed4afd-9214-4ec9-816d-2ba6213f2f89/util/0.log" Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.034086 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m_8fed4afd-9214-4ec9-816d-2ba6213f2f89/pull/0.log" Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.070379 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m_8fed4afd-9214-4ec9-816d-2ba6213f2f89/pull/0.log" Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.190127 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m_8fed4afd-9214-4ec9-816d-2ba6213f2f89/pull/0.log" Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.232213 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m_8fed4afd-9214-4ec9-816d-2ba6213f2f89/util/0.log" Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.235044 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m_8fed4afd-9214-4ec9-816d-2ba6213f2f89/extract/0.log" Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.426449 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-6fg4r_9003ffe6-59a3-4c7c-96d0-d129a9339247/manager/0.log" Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.492804 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-ph5lf_f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9/manager/0.log" Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.682090 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-b65fl_0fd5051a-5be4-4336-af86-9674469b76a0/manager/0.log" Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.807005 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-jxkmf_3fae5204-d3a1-4e39-ac3d-d28c8a55c7db/manager/0.log" Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.903041 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-hsbbr_0b3a96d4-f5fc-47be-9c28-47239b2488c1/manager/0.log" Jan 30 22:58:37 crc kubenswrapper[4751]: I0130 22:58:37.346541 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-52vr2_2d6f1acc-6416-44ae-9082-3ebe16dce448/manager/0.log" Jan 30 22:58:37 crc kubenswrapper[4751]: I0130 22:58:37.438941 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-n2shb_9a88f139-89db-4b3a-8fea-bf951e59f564/manager/0.log" Jan 30 22:58:37 crc kubenswrapper[4751]: I0130 22:58:37.603704 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-sw6zv_b2777bff-2cca-4f41-8655-a737f13b4885/manager/0.log" Jan 30 22:58:37 crc kubenswrapper[4751]: I0130 22:58:37.661413 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-7sk5v_694b29bc-994c-4983-81c7-b32d47db553b/manager/0.log" Jan 30 22:58:37 crc kubenswrapper[4751]: I0130 22:58:37.838952 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-xk52h_1ad347ea-d2ce-4a1e-912a-8471445396f7/manager/0.log" Jan 30 22:58:37 crc kubenswrapper[4751]: I0130 22:58:37.934792 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-9vvgb_4a416a7c-3094-46ef-8370-9cad7446339b/manager/0.log" Jan 30 22:58:38 crc kubenswrapper[4751]: I0130 22:58:38.152380 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-tbp7n_e596dcc9-7f31-4312-99e3-7d86d318ef9d/manager/0.log" Jan 30 22:58:38 crc kubenswrapper[4751]: I0130 22:58:38.159264 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-d6slz_fcf49997-888f-4e58-99e7-f1f677dc7111/manager/0.log" Jan 30 22:58:38 crc kubenswrapper[4751]: I0130 22:58:38.321387 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk_0026e471-8226-4038-8c52-f0add2877c8d/manager/0.log" Jan 30 22:58:38 crc kubenswrapper[4751]: I0130 22:58:38.542233 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-55fdcd6c79-9hzxh_4b543295-a1a6-40ad-8b74-0ee6fdeb66c3/operator/0.log" Jan 30 22:58:38 crc kubenswrapper[4751]: I0130 22:58:38.798844 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-lw6gm_bd6eaa60-4995-4ace-8ab0-a880f09cbee0/registry-server/0.log" Jan 30 22:58:39 crc kubenswrapper[4751]: I0130 22:58:39.073420 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-c7tj6_c711cf07-a695-447a-8d01-147b10e9059f/manager/0.log" Jan 30 22:58:39 crc kubenswrapper[4751]: I0130 22:58:39.278788 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-dx8wk_ace28553-76bc-4472-a671-788e1fb9a1ff/manager/0.log" Jan 30 22:58:39 crc kubenswrapper[4751]: I0130 22:58:39.421116 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-v8vch_a986231c-2119-4a13-801d-51119db5d365/operator/0.log" Jan 30 22:58:39 crc kubenswrapper[4751]: I0130 22:58:39.708715 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-r6smn_0c86abfd-77a9-4388-8b7f-b61bb378f7cb/manager/0.log" Jan 30 22:58:40 crc kubenswrapper[4751]: I0130 22:58:40.128642 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-sc9gq_3d59cc79-1a37-434a-a04b-156739f469d7/manager/0.log" Jan 30 22:58:40 crc kubenswrapper[4751]: I0130 22:58:40.386205 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-gcvgx_cbae5889-938b-4211-94a6-de960df2f95d/manager/0.log" Jan 30 22:58:40 crc kubenswrapper[4751]: I0130 22:58:40.802259 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6749767b8f-62rqr_3b9cc057-30d7-4a03-8c76-a1ca7200dbae/manager/0.log" Jan 30 22:58:40 crc kubenswrapper[4751]: I0130 22:58:40.952571 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7d48698d88-jbmh6_dac6f1f3-8549-488c-bb63-aa980f4a1282/manager/0.log" Jan 30 22:58:54 crc kubenswrapper[4751]: I0130 22:58:54.127390 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:58:54 crc kubenswrapper[4751]: I0130 22:58:54.127960 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:59:01 crc kubenswrapper[4751]: I0130 22:59:01.490731 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xf2m8_357257a0-2b96-4833-84cb-1c4326c34e61/control-plane-machine-set-operator/0.log" Jan 30 22:59:01 crc kubenswrapper[4751]: I0130 22:59:01.731711 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-nk5rn_ebb4c857-4f54-440f-81d7-74eadc588099/kube-rbac-proxy/0.log" Jan 30 22:59:01 crc kubenswrapper[4751]: I0130 22:59:01.768849 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-nk5rn_ebb4c857-4f54-440f-81d7-74eadc588099/machine-api-operator/0.log" Jan 30 22:59:15 crc kubenswrapper[4751]: I0130 22:59:15.387217 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-mbzjn_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd/cert-manager-controller/0.log" Jan 30 22:59:15 crc kubenswrapper[4751]: I0130 22:59:15.507816 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-9k9rg_04bdab63-06c1-475f-8351-a2ccc4292f25/cert-manager-cainjector/0.log" Jan 30 22:59:15 crc kubenswrapper[4751]: I0130 22:59:15.617607 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-shbmk_9acdc588-bef3-4ce2-bf06-afea86273408/cert-manager-webhook/0.log" Jan 30 22:59:24 crc kubenswrapper[4751]: I0130 22:59:24.126910 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:59:24 crc kubenswrapper[4751]: I0130 22:59:24.127434 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:59:28 crc kubenswrapper[4751]: I0130 22:59:28.373215 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-kxkfz_2806dd41-f23b-466a-a187-4689685f6b86/nmstate-console-plugin/0.log" Jan 30 22:59:28 crc kubenswrapper[4751]: I0130 22:59:28.559870 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-d95cp_eea5deed-9d07-45b2-b400-64b7c2336994/nmstate-handler/0.log" Jan 30 22:59:28 crc kubenswrapper[4751]: I0130 22:59:28.664914 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rfrtx_f0ccd951-df7f-452f-b340-64fa7c9f9916/kube-rbac-proxy/0.log" Jan 30 22:59:28 crc kubenswrapper[4751]: I0130 22:59:28.734692 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rfrtx_f0ccd951-df7f-452f-b340-64fa7c9f9916/nmstate-metrics/0.log" Jan 30 22:59:28 crc kubenswrapper[4751]: I0130 22:59:28.823554 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-k49vc_c9f603b5-de3a-4d5e-acc1-6da32a99dcaa/nmstate-operator/0.log" Jan 30 22:59:28 crc kubenswrapper[4751]: I0130 22:59:28.913230 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-7hqmv_be191f8d-d8ce-4f29-95f1-1278c108ca11/nmstate-webhook/0.log" Jan 30 22:59:41 crc kubenswrapper[4751]: I0130 22:59:41.112796 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7988bf4897-spq9h_d32a4de7-a9b5-408d-b678-bcc0244cceee/kube-rbac-proxy/0.log" Jan 30 22:59:41 crc kubenswrapper[4751]: I0130 22:59:41.142420 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7988bf4897-spq9h_d32a4de7-a9b5-408d-b678-bcc0244cceee/manager/0.log" Jan 30 22:59:53 crc kubenswrapper[4751]: I0130 22:59:53.954536 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-5nv4n_96f3e554-fbfc-4716-b6ee-0913394521fa/prometheus-operator/0.log" Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.126827 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.126994 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.127075 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.128425 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.128520 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" gracePeriod=600 Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.195790 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-858b879-n4cw4_c0edc270-3913-41f7-9218-32549d1d3dea/prometheus-operator-admission-webhook/0.log" Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.281184 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-858b879-vpng2_16999302-ac18-4e1c-b3f7-a2bf3f7605aa/prometheus-operator-admission-webhook/0.log" Jan 30 22:59:54 crc kubenswrapper[4751]: E0130 22:59:54.356978 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.510906 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-p97jc_0d7cf074-b623-45d0-ac84-c1e52a626885/observability-ui-dashboards/0.log" Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.521896 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" exitCode=0 Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.521941 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b"} Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.521975 4751 scope.go:117] "RemoveContainer" containerID="1f994498b8705c718253f1d686dfa142a31e491cf05bc7e00a9d3f4b2c57ea67" Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.522883 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 22:59:54 crc kubenswrapper[4751]: E0130 22:59:54.523209 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.541229 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-lhkl2_3ee6b659-c8c9-4f07-a897-c69db812f880/operator/0.log" Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.720990 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-l498d_7472790e-3a0e-40dd-909c-4301ba84d884/perses-operator/0.log" Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.254780 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z"] Jan 30 23:00:00 crc kubenswrapper[4751]: E0130 23:00:00.255845 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e60a3a88-2975-458f-a5e9-422a6c519f65" containerName="container-00" Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.255861 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e60a3a88-2975-458f-a5e9-422a6c519f65" containerName="container-00" Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.256100 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e60a3a88-2975-458f-a5e9-422a6c519f65" containerName="container-00" Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.260724 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.265742 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.269046 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.284695 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z"] Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.363216 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-secret-volume\") pod \"collect-profiles-29496900-whf7z\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.363462 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpsph\" (UniqueName: \"kubernetes.io/projected/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-kube-api-access-wpsph\") pod \"collect-profiles-29496900-whf7z\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.363490 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-config-volume\") pod \"collect-profiles-29496900-whf7z\" (UID: 
\"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.465755 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-secret-volume\") pod \"collect-profiles-29496900-whf7z\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.466037 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpsph\" (UniqueName: \"kubernetes.io/projected/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-kube-api-access-wpsph\") pod \"collect-profiles-29496900-whf7z\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.466589 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-config-volume\") pod \"collect-profiles-29496900-whf7z\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.467711 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-config-volume\") pod \"collect-profiles-29496900-whf7z\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.508296 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpsph\" (UniqueName: \"kubernetes.io/projected/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-kube-api-access-wpsph\") pod \"collect-profiles-29496900-whf7z\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.508922 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-secret-volume\") pod \"collect-profiles-29496900-whf7z\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.583913 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" Jan 30 23:00:01 crc kubenswrapper[4751]: I0130 23:00:01.745027 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z"] Jan 30 23:00:02 crc kubenswrapper[4751]: I0130 23:00:02.616097 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" event={"ID":"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d","Type":"ContainerStarted","Data":"3c940c159a2bba2aeb4ad2d7c11d57a3d87f27896ed07f7333b30e4d2c0c80be"} Jan 30 23:00:02 crc kubenswrapper[4751]: I0130 23:00:02.617618 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" event={"ID":"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d","Type":"ContainerStarted","Data":"130d9c001698551e1af4f1787b7af891f92d385e4e0bd3b2ad037906d0073a05"} Jan 30 23:00:02 crc kubenswrapper[4751]: I0130 23:00:02.636754 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" podStartSLOduration=2.636731638 podStartE2EDuration="2.636731638s" podCreationTimestamp="2026-01-30 23:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:00:02.634715473 +0000 UTC m=+6341.380538132" watchObservedRunningTime="2026-01-30 23:00:02.636731638 +0000 UTC m=+6341.382554287" Jan 30 23:00:03 crc kubenswrapper[4751]: I0130 23:00:03.627493 4751 generic.go:334] "Generic (PLEG): container finished" podID="797d18c7-90e7-4a29-b4bd-c8ad9148ea0d" containerID="3c940c159a2bba2aeb4ad2d7c11d57a3d87f27896ed07f7333b30e4d2c0c80be" exitCode=0 Jan 30 23:00:03 crc kubenswrapper[4751]: I0130 23:00:03.627593 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" event={"ID":"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d","Type":"ContainerDied","Data":"3c940c159a2bba2aeb4ad2d7c11d57a3d87f27896ed07f7333b30e4d2c0c80be"} Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.108159 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.298916 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpsph\" (UniqueName: \"kubernetes.io/projected/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-kube-api-access-wpsph\") pod \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") " Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.299002 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-secret-volume\") pod \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") " Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.299156 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-config-volume\") pod \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") " Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.299972 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-config-volume" (OuterVolumeSpecName: "config-volume") pod "797d18c7-90e7-4a29-b4bd-c8ad9148ea0d" (UID: "797d18c7-90e7-4a29-b4bd-c8ad9148ea0d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.306955 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "797d18c7-90e7-4a29-b4bd-c8ad9148ea0d" (UID: "797d18c7-90e7-4a29-b4bd-c8ad9148ea0d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.306972 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-kube-api-access-wpsph" (OuterVolumeSpecName: "kube-api-access-wpsph") pod "797d18c7-90e7-4a29-b4bd-c8ad9148ea0d" (UID: "797d18c7-90e7-4a29-b4bd-c8ad9148ea0d"). InnerVolumeSpecName "kube-api-access-wpsph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.402713 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpsph\" (UniqueName: \"kubernetes.io/projected/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-kube-api-access-wpsph\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.402754 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.402778 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.661502 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" event={"ID":"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d","Type":"ContainerDied","Data":"130d9c001698551e1af4f1787b7af891f92d385e4e0bd3b2ad037906d0073a05"} Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.661549 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="130d9c001698551e1af4f1787b7af891f92d385e4e0bd3b2ad037906d0073a05" Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.661552 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" Jan 30 23:00:06 crc kubenswrapper[4751]: I0130 23:00:06.217542 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m"] Jan 30 23:00:06 crc kubenswrapper[4751]: I0130 23:00:06.229133 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m"] Jan 30 23:00:06 crc kubenswrapper[4751]: I0130 23:00:06.976810 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:00:06 crc kubenswrapper[4751]: E0130 23:00:06.977205 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:00:07 crc kubenswrapper[4751]: I0130 23:00:07.990800 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f9671fd-4ee5-4071-8dd4-86a335928d79" path="/var/lib/kubelet/pods/3f9671fd-4ee5-4071-8dd4-86a335928d79/volumes" Jan 30 23:00:10 crc kubenswrapper[4751]: I0130 23:00:10.322433 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-79cf69ddc8-tg4r2_c60111a8-d193-4bbb-af4b-a5f286a4b04b/cluster-logging-operator/0.log" Jan 30 23:00:10 crc kubenswrapper[4751]: I0130 23:00:10.538312 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-f6llv_d1f22c66-daa2-4dd7-8394-ceab983464e2/collector/0.log" Jan 30 23:00:10 crc kubenswrapper[4751]: I0130 23:00:10.621127 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-compactor-0_82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76/loki-compactor/0.log" Jan 30 23:00:10 crc kubenswrapper[4751]: I0130 23:00:10.775547 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5f678c8dd6-mc9wc_d066c155-02e0-448e-9d4c-f578a36e553b/loki-distributor/0.log" Jan 30 23:00:10 crc kubenswrapper[4751]: I0130 23:00:10.809114 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f4fcfb764-r5mfq_653268f5-1827-4109-a68b-3cc7670e65f8/gateway/0.log" Jan 30 23:00:10 crc kubenswrapper[4751]: I0130 23:00:10.994825 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f4fcfb764-r5mfq_653268f5-1827-4109-a68b-3cc7670e65f8/opa/0.log" Jan 30 23:00:11 crc kubenswrapper[4751]: I0130 23:00:11.018626 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f4fcfb764-rbqpr_326140a4-6f2a-48c1-b5a2-0b02ce345c50/gateway/0.log" Jan 30 23:00:11 crc kubenswrapper[4751]: I0130 23:00:11.051100 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f4fcfb764-rbqpr_326140a4-6f2a-48c1-b5a2-0b02ce345c50/opa/0.log" Jan 30 23:00:11 crc kubenswrapper[4751]: I0130 23:00:11.180711 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac/loki-index-gateway/0.log" Jan 30 23:00:11 crc kubenswrapper[4751]: I0130 23:00:11.319187 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_4f247b61-4ba2-4c4e-8d97-c16900635ddc/loki-ingester/0.log" Jan 30 23:00:11 crc kubenswrapper[4751]: I0130 23:00:11.437718 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76788598db-gbf6p_096a86f8-72dc-4bd5-a2b4-48b67a26d792/loki-querier/0.log" Jan 30 23:00:11 crc kubenswrapper[4751]: I0130 23:00:11.552725 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-69d9546745-7cdp9_8083b036-5700-420a-ad3f-1e471813194e/loki-query-frontend/0.log" Jan 30 23:00:19 crc kubenswrapper[4751]: I0130 23:00:19.975728 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:00:19 crc kubenswrapper[4751]: E0130 23:00:19.976580 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:00:25 crc kubenswrapper[4751]: I0130 23:00:25.964785 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-p8nst_41e79790-830a-48bb-93b6-dd55dc050acf/kube-rbac-proxy/0.log" Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.193117 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-p8nst_41e79790-830a-48bb-93b6-dd55dc050acf/controller/0.log" Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.210935 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-frr-files/0.log" Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.478213 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-frr-files/0.log" Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.500847 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-reloader/0.log" Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.539342 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-metrics/0.log" Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.542919 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-reloader/0.log" Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.701407 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-reloader/0.log" Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.750243 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-frr-files/0.log" Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.782544 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-metrics/0.log" Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.812789 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-metrics/0.log" Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.996547 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-frr-files/0.log" Jan 30 23:00:27 crc kubenswrapper[4751]: I0130 23:00:27.063165 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-reloader/0.log" Jan 30 23:00:27 crc kubenswrapper[4751]: I0130 23:00:27.075995 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/controller/0.log" Jan 30 23:00:27 crc kubenswrapper[4751]: I0130 23:00:27.087170 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-metrics/0.log" Jan 30 23:00:27 crc kubenswrapper[4751]: I0130 23:00:27.276645 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/frr-metrics/0.log" Jan 30 23:00:27 crc kubenswrapper[4751]: I0130 23:00:27.341207 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/kube-rbac-proxy-frr/0.log" Jan 30 23:00:27 crc kubenswrapper[4751]: I0130 23:00:27.386677 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/kube-rbac-proxy/0.log" Jan 30 23:00:27 crc kubenswrapper[4751]: I0130 23:00:27.500793 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/reloader/0.log" Jan 30 23:00:27 crc kubenswrapper[4751]: I0130 
23:00:27.635199 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-2zl97_f8544a86-1b67-4c2e-9b56-ca708c47b4e8/frr-k8s-webhook-server/0.log" Jan 30 23:00:27 crc kubenswrapper[4751]: I0130 23:00:27.989538 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6697664f96-w8tr4_088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a/manager/0.log" Jan 30 23:00:28 crc kubenswrapper[4751]: I0130 23:00:28.118354 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-597477f4b5-q868h_61545af5-1133-4922-a477-9155212b642c/webhook-server/0.log" Jan 30 23:00:28 crc kubenswrapper[4751]: I0130 23:00:28.389664 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zqbmp_e9fc7f0b-0bab-4435-82d8-b78841d64687/kube-rbac-proxy/0.log" Jan 30 23:00:29 crc kubenswrapper[4751]: I0130 23:00:29.429855 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/frr/0.log" Jan 30 23:00:29 crc kubenswrapper[4751]: I0130 23:00:29.802605 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zqbmp_e9fc7f0b-0bab-4435-82d8-b78841d64687/speaker/0.log" Jan 30 23:00:30 crc kubenswrapper[4751]: I0130 23:00:30.351214 4751 scope.go:117] "RemoveContainer" containerID="17631e0b0228d44951b111801652ba8aead8eab296a100d22a49d18b40b57ded" Jan 30 23:00:34 crc kubenswrapper[4751]: I0130 23:00:34.976141 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:00:34 crc kubenswrapper[4751]: E0130 23:00:34.977044 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:00:42 crc kubenswrapper[4751]: I0130 23:00:42.474500 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw_00263593-80af-4a40-a2c4-538f582434c4/util/0.log" Jan 30 23:00:42 crc kubenswrapper[4751]: I0130 23:00:42.607455 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw_00263593-80af-4a40-a2c4-538f582434c4/util/0.log" Jan 30 23:00:42 crc kubenswrapper[4751]: I0130 23:00:42.770107 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw_00263593-80af-4a40-a2c4-538f582434c4/pull/0.log" Jan 30 23:00:42 crc kubenswrapper[4751]: I0130 23:00:42.802172 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw_00263593-80af-4a40-a2c4-538f582434c4/pull/0.log" Jan 30 23:00:42 crc kubenswrapper[4751]: I0130 23:00:42.880958 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw_00263593-80af-4a40-a2c4-538f582434c4/pull/0.log" Jan 30 23:00:42 crc kubenswrapper[4751]: I0130 23:00:42.947737 
4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw_00263593-80af-4a40-a2c4-538f582434c4/util/0.log" Jan 30 23:00:42 crc kubenswrapper[4751]: I0130 23:00:42.989606 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw_00263593-80af-4a40-a2c4-538f582434c4/extract/0.log" Jan 30 23:00:43 crc kubenswrapper[4751]: I0130 23:00:43.151470 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh_eac36070-4c04-460f-bfbb-e77659bad07e/util/0.log" Jan 30 23:00:43 crc kubenswrapper[4751]: I0130 23:00:43.388405 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh_eac36070-4c04-460f-bfbb-e77659bad07e/pull/0.log" Jan 30 23:00:43 crc kubenswrapper[4751]: I0130 23:00:43.388674 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh_eac36070-4c04-460f-bfbb-e77659bad07e/pull/0.log" Jan 30 23:00:43 crc kubenswrapper[4751]: I0130 23:00:43.404989 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh_eac36070-4c04-460f-bfbb-e77659bad07e/util/0.log" Jan 30 23:00:43 crc kubenswrapper[4751]: I0130 23:00:43.614955 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh_eac36070-4c04-460f-bfbb-e77659bad07e/util/0.log" Jan 30 23:00:43 crc kubenswrapper[4751]: I0130 23:00:43.638631 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh_eac36070-4c04-460f-bfbb-e77659bad07e/extract/0.log" Jan 30 23:00:43 crc kubenswrapper[4751]: I0130 23:00:43.647975 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh_eac36070-4c04-460f-bfbb-e77659bad07e/pull/0.log" Jan 30 23:00:43 crc kubenswrapper[4751]: I0130 23:00:43.856747 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pscx6_fd9d691f-2785-4248-80d8-903f36ff7f1f/extract-utilities/0.log" Jan 30 23:00:44 crc kubenswrapper[4751]: I0130 23:00:44.018405 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pscx6_fd9d691f-2785-4248-80d8-903f36ff7f1f/extract-utilities/0.log" Jan 30 23:00:44 crc kubenswrapper[4751]: I0130 23:00:44.062388 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pscx6_fd9d691f-2785-4248-80d8-903f36ff7f1f/extract-content/0.log" Jan 30 23:00:44 crc kubenswrapper[4751]: I0130 23:00:44.135288 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pscx6_fd9d691f-2785-4248-80d8-903f36ff7f1f/extract-content/0.log" Jan 30 23:00:44 crc kubenswrapper[4751]: I0130 23:00:44.263987 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pscx6_fd9d691f-2785-4248-80d8-903f36ff7f1f/extract-utilities/0.log" Jan 30 23:00:44 crc kubenswrapper[4751]: I0130 23:00:44.335925 4751 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pscx6_fd9d691f-2785-4248-80d8-903f36ff7f1f/extract-content/0.log" Jan 30 23:00:44 crc kubenswrapper[4751]: I0130 23:00:44.546612 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvqpq_f675e6ee-15d0-4fa7-94ec-c08976e45a20/extract-utilities/0.log" Jan 30 23:00:44 crc kubenswrapper[4751]: I0130 23:00:44.561874 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pscx6_fd9d691f-2785-4248-80d8-903f36ff7f1f/registry-server/0.log" Jan 30 23:00:44 crc kubenswrapper[4751]: I0130 23:00:44.778787 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvqpq_f675e6ee-15d0-4fa7-94ec-c08976e45a20/extract-utilities/0.log" Jan 30 23:00:44 crc kubenswrapper[4751]: I0130 23:00:44.803969 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvqpq_f675e6ee-15d0-4fa7-94ec-c08976e45a20/extract-content/0.log" Jan 30 23:00:44 crc kubenswrapper[4751]: I0130 23:00:44.821717 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvqpq_f675e6ee-15d0-4fa7-94ec-c08976e45a20/extract-content/0.log" Jan 30 23:00:45 crc kubenswrapper[4751]: I0130 23:00:45.032223 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvqpq_f675e6ee-15d0-4fa7-94ec-c08976e45a20/extract-content/0.log" Jan 30 23:00:45 crc kubenswrapper[4751]: I0130 23:00:45.041143 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvqpq_f675e6ee-15d0-4fa7-94ec-c08976e45a20/extract-utilities/0.log" Jan 30 23:00:45 crc kubenswrapper[4751]: I0130 23:00:45.231279 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-s9tfl_7804f857-fb14-4305-97cc-c966621a55b2/marketplace-operator/0.log" Jan 30 23:00:45 crc kubenswrapper[4751]: I0130 23:00:45.387890 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n5g7x_b187a442-317c-42c9-ba1a-ff41e0b9bc90/extract-utilities/0.log" Jan 30 23:00:45 crc kubenswrapper[4751]: I0130 23:00:45.565398 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n5g7x_b187a442-317c-42c9-ba1a-ff41e0b9bc90/extract-utilities/0.log" Jan 30 23:00:45 crc kubenswrapper[4751]: I0130 23:00:45.737664 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n5g7x_b187a442-317c-42c9-ba1a-ff41e0b9bc90/extract-content/0.log" Jan 30 23:00:45 crc kubenswrapper[4751]: I0130 23:00:45.737854 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n5g7x_b187a442-317c-42c9-ba1a-ff41e0b9bc90/extract-content/0.log" Jan 30 23:00:45 crc kubenswrapper[4751]: I0130 23:00:45.975755 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:00:45 crc kubenswrapper[4751]: E0130 23:00:45.976128 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:00:45 crc kubenswrapper[4751]: I0130 23:00:45.987868 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvqpq_f675e6ee-15d0-4fa7-94ec-c08976e45a20/registry-server/0.log" Jan 30 23:00:46 crc kubenswrapper[4751]: I0130 23:00:46.007306 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n5g7x_b187a442-317c-42c9-ba1a-ff41e0b9bc90/extract-content/0.log" Jan 30 23:00:46 crc kubenswrapper[4751]: I0130 23:00:46.007691 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n5g7x_b187a442-317c-42c9-ba1a-ff41e0b9bc90/extract-utilities/0.log" Jan 30 23:00:46 crc kubenswrapper[4751]: I0130 23:00:46.244450 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n5g7x_b187a442-317c-42c9-ba1a-ff41e0b9bc90/registry-server/0.log" Jan 30 23:00:46 crc kubenswrapper[4751]: I0130 23:00:46.275022 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zps7r_fac62ab3-6625-4680-a70b-235f054baa64/extract-utilities/0.log" Jan 30 23:00:46 crc kubenswrapper[4751]: I0130 23:00:46.492554 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zps7r_fac62ab3-6625-4680-a70b-235f054baa64/extract-utilities/0.log" Jan 30 23:00:46 crc kubenswrapper[4751]: I0130 23:00:46.493884 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zps7r_fac62ab3-6625-4680-a70b-235f054baa64/extract-content/0.log" Jan 30 23:00:46 crc kubenswrapper[4751]: I0130 23:00:46.545178 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zps7r_fac62ab3-6625-4680-a70b-235f054baa64/extract-content/0.log" Jan 30 23:00:46 crc kubenswrapper[4751]: I0130 23:00:46.718336 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zps7r_fac62ab3-6625-4680-a70b-235f054baa64/extract-content/0.log" Jan 30 23:00:46 crc kubenswrapper[4751]: I0130 23:00:46.779523 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zps7r_fac62ab3-6625-4680-a70b-235f054baa64/extract-utilities/0.log" Jan 30 23:00:47 crc kubenswrapper[4751]: I0130 23:00:47.798212 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zps7r_fac62ab3-6625-4680-a70b-235f054baa64/registry-server/0.log" Jan 30 23:00:57 crc kubenswrapper[4751]: I0130 23:00:57.982642 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:00:57 crc kubenswrapper[4751]: E0130 23:00:57.983886 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.166144 4751 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29496901-zql87"] Jan 30 23:01:00 crc kubenswrapper[4751]: E0130 23:01:00.168606 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="797d18c7-90e7-4a29-b4bd-c8ad9148ea0d" containerName="collect-profiles" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.168742 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="797d18c7-90e7-4a29-b4bd-c8ad9148ea0d" containerName="collect-profiles" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.169119 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="797d18c7-90e7-4a29-b4bd-c8ad9148ea0d" containerName="collect-profiles" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.170313 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496901-zql87" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.192709 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496901-zql87"] Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.297539 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-combined-ca-bundle\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.297589 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-fernet-keys\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.297687 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-config-data\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.297778 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8fwz\" (UniqueName: \"kubernetes.io/projected/a91608ea-b09c-4747-9249-51a7aa22de08-kube-api-access-r8fwz\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.400410 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8fwz\" (UniqueName: \"kubernetes.io/projected/a91608ea-b09c-4747-9249-51a7aa22de08-kube-api-access-r8fwz\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.400530 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-combined-ca-bundle\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.400565 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-fernet-keys\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.401661 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-config-data\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.408426 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-combined-ca-bundle\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.423617 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-fernet-keys\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.424299 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8fwz\" (UniqueName: \"kubernetes.io/projected/a91608ea-b09c-4747-9249-51a7aa22de08-kube-api-access-r8fwz\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.424486 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-config-data\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.495238 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496901-zql87" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.524152 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-858b879-vpng2_16999302-ac18-4e1c-b3f7-a2bf3f7605aa/prometheus-operator-admission-webhook/0.log" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.571297 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-858b879-n4cw4_c0edc270-3913-41f7-9218-32549d1d3dea/prometheus-operator-admission-webhook/0.log" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.594111 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-5nv4n_96f3e554-fbfc-4716-b6ee-0913394521fa/prometheus-operator/0.log" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.788492 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-p97jc_0d7cf074-b623-45d0-ac84-c1e52a626885/observability-ui-dashboards/0.log" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.825016 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-lhkl2_3ee6b659-c8c9-4f07-a897-c69db812f880/operator/0.log" Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.836438 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-l498d_7472790e-3a0e-40dd-909c-4301ba84d884/perses-operator/0.log" Jan 30 23:01:01 crc kubenswrapper[4751]: I0130 23:01:01.301868 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496901-zql87"] Jan 30 23:01:02 crc kubenswrapper[4751]: I0130 23:01:02.297211 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496901-zql87" event={"ID":"a91608ea-b09c-4747-9249-51a7aa22de08","Type":"ContainerStarted","Data":"6ad35f97dd21fbfc1789403681e18f795425647b698999dcbb5b5807329516f6"} Jan 30 23:01:02 crc kubenswrapper[4751]: I0130 23:01:02.297881 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496901-zql87" event={"ID":"a91608ea-b09c-4747-9249-51a7aa22de08","Type":"ContainerStarted","Data":"60c2b303ea32ec95d9debf697aeaa801f1a06e41659995e958951d8e46f97e86"} Jan 30 23:01:02 crc kubenswrapper[4751]: I0130 23:01:02.321642 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29496901-zql87" podStartSLOduration=2.321622795 podStartE2EDuration="2.321622795s" podCreationTimestamp="2026-01-30 23:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:01:02.310244694 +0000 UTC m=+6401.056067343" watchObservedRunningTime="2026-01-30 23:01:02.321622795 +0000 UTC m=+6401.067445444" Jan 30 23:01:06 crc kubenswrapper[4751]: I0130 23:01:06.340821 4751 generic.go:334] "Generic (PLEG): container finished" podID="a91608ea-b09c-4747-9249-51a7aa22de08" containerID="6ad35f97dd21fbfc1789403681e18f795425647b698999dcbb5b5807329516f6" exitCode=0 Jan 30 23:01:06 crc kubenswrapper[4751]: I0130 23:01:06.340888 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496901-zql87" 
event={"ID":"a91608ea-b09c-4747-9249-51a7aa22de08","Type":"ContainerDied","Data":"6ad35f97dd21fbfc1789403681e18f795425647b698999dcbb5b5807329516f6"} Jan 30 23:01:07 crc kubenswrapper[4751]: I0130 23:01:07.807062 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496901-zql87" Jan 30 23:01:07 crc kubenswrapper[4751]: I0130 23:01:07.886644 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-combined-ca-bundle\") pod \"a91608ea-b09c-4747-9249-51a7aa22de08\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " Jan 30 23:01:07 crc kubenswrapper[4751]: I0130 23:01:07.886714 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-config-data\") pod \"a91608ea-b09c-4747-9249-51a7aa22de08\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " Jan 30 23:01:07 crc kubenswrapper[4751]: I0130 23:01:07.886754 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8fwz\" (UniqueName: \"kubernetes.io/projected/a91608ea-b09c-4747-9249-51a7aa22de08-kube-api-access-r8fwz\") pod \"a91608ea-b09c-4747-9249-51a7aa22de08\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " Jan 30 23:01:07 crc kubenswrapper[4751]: I0130 23:01:07.887015 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-fernet-keys\") pod \"a91608ea-b09c-4747-9249-51a7aa22de08\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " Jan 30 23:01:07 crc kubenswrapper[4751]: I0130 23:01:07.925479 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a91608ea-b09c-4747-9249-51a7aa22de08" (UID: "a91608ea-b09c-4747-9249-51a7aa22de08"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:01:07 crc kubenswrapper[4751]: I0130 23:01:07.939598 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a91608ea-b09c-4747-9249-51a7aa22de08-kube-api-access-r8fwz" (OuterVolumeSpecName: "kube-api-access-r8fwz") pod "a91608ea-b09c-4747-9249-51a7aa22de08" (UID: "a91608ea-b09c-4747-9249-51a7aa22de08"). InnerVolumeSpecName "kube-api-access-r8fwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:01:07 crc kubenswrapper[4751]: I0130 23:01:07.994392 4751 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:07 crc kubenswrapper[4751]: I0130 23:01:07.994662 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8fwz\" (UniqueName: \"kubernetes.io/projected/a91608ea-b09c-4747-9249-51a7aa22de08-kube-api-access-r8fwz\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:08 crc kubenswrapper[4751]: I0130 23:01:08.010077 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a91608ea-b09c-4747-9249-51a7aa22de08" (UID: "a91608ea-b09c-4747-9249-51a7aa22de08"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:01:08 crc kubenswrapper[4751]: I0130 23:01:08.062130 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-config-data" (OuterVolumeSpecName: "config-data") pod "a91608ea-b09c-4747-9249-51a7aa22de08" (UID: "a91608ea-b09c-4747-9249-51a7aa22de08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:01:08 crc kubenswrapper[4751]: I0130 23:01:08.096738 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:08 crc kubenswrapper[4751]: I0130 23:01:08.096770 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:08 crc kubenswrapper[4751]: I0130 23:01:08.367198 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496901-zql87" event={"ID":"a91608ea-b09c-4747-9249-51a7aa22de08","Type":"ContainerDied","Data":"60c2b303ea32ec95d9debf697aeaa801f1a06e41659995e958951d8e46f97e86"} Jan 30 23:01:08 crc kubenswrapper[4751]: I0130 23:01:08.367236 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60c2b303ea32ec95d9debf697aeaa801f1a06e41659995e958951d8e46f97e86" Jan 30 23:01:08 crc kubenswrapper[4751]: I0130 23:01:08.367262 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496901-zql87" Jan 30 23:01:10 crc kubenswrapper[4751]: I0130 23:01:10.976301 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:01:10 crc kubenswrapper[4751]: E0130 23:01:10.977127 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:01:14 crc kubenswrapper[4751]: I0130 23:01:14.836212 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7988bf4897-spq9h_d32a4de7-a9b5-408d-b678-bcc0244cceee/kube-rbac-proxy/0.log" Jan 30 23:01:14 crc kubenswrapper[4751]: I0130 23:01:14.936631 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7988bf4897-spq9h_d32a4de7-a9b5-408d-b678-bcc0244cceee/manager/0.log" Jan 30 23:01:23 crc kubenswrapper[4751]: I0130 23:01:23.975871 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:01:23 crc kubenswrapper[4751]: E0130 23:01:23.976610 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.093581 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pzbtf"] Jan 30 23:01:34 crc kubenswrapper[4751]: E0130 23:01:34.095265 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a91608ea-b09c-4747-9249-51a7aa22de08" containerName="keystone-cron" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.095288 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91608ea-b09c-4747-9249-51a7aa22de08" containerName="keystone-cron" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.095668 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a91608ea-b09c-4747-9249-51a7aa22de08" containerName="keystone-cron" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.097596 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.154404 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pzbtf"] Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.248398 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz9rq\" (UniqueName: \"kubernetes.io/projected/66a67eb5-04b0-4bcd-814d-e59031703d25-kube-api-access-hz9rq\") pod \"community-operators-pzbtf\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.248462 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-catalog-content\") pod \"community-operators-pzbtf\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.248542 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-utilities\") pod \"community-operators-pzbtf\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.350085 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-utilities\") pod \"community-operators-pzbtf\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.350269 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz9rq\" (UniqueName: \"kubernetes.io/projected/66a67eb5-04b0-4bcd-814d-e59031703d25-kube-api-access-hz9rq\") pod \"community-operators-pzbtf\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.350306 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-catalog-content\") pod 
\"community-operators-pzbtf\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.351460 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-utilities\") pod \"community-operators-pzbtf\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.351486 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-catalog-content\") pod \"community-operators-pzbtf\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.375957 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz9rq\" (UniqueName: \"kubernetes.io/projected/66a67eb5-04b0-4bcd-814d-e59031703d25-kube-api-access-hz9rq\") pod \"community-operators-pzbtf\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.445603 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:35 crc kubenswrapper[4751]: I0130 23:01:35.288995 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pzbtf"] Jan 30 23:01:35 crc kubenswrapper[4751]: I0130 23:01:35.655612 4751 generic.go:334] "Generic (PLEG): container finished" podID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerID="3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea" exitCode=0 Jan 30 23:01:35 crc kubenswrapper[4751]: I0130 23:01:35.655661 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzbtf" event={"ID":"66a67eb5-04b0-4bcd-814d-e59031703d25","Type":"ContainerDied","Data":"3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea"} Jan 30 23:01:35 crc kubenswrapper[4751]: I0130 23:01:35.655720 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzbtf" event={"ID":"66a67eb5-04b0-4bcd-814d-e59031703d25","Type":"ContainerStarted","Data":"6ec1a8fb44f2780198407768eba5cde4184587063cd63e961da2205d4fa2e8c7"} Jan 30 23:01:35 crc kubenswrapper[4751]: I0130 23:01:35.976280 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:01:35 crc kubenswrapper[4751]: E0130 23:01:35.977792 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:01:37 crc kubenswrapper[4751]: I0130 23:01:37.679835 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzbtf" 
event={"ID":"66a67eb5-04b0-4bcd-814d-e59031703d25","Type":"ContainerStarted","Data":"6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8"} Jan 30 23:01:38 crc kubenswrapper[4751]: I0130 23:01:38.707463 4751 generic.go:334] "Generic (PLEG): container finished" podID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerID="6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8" exitCode=0 Jan 30 23:01:38 crc kubenswrapper[4751]: I0130 23:01:38.707812 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzbtf" event={"ID":"66a67eb5-04b0-4bcd-814d-e59031703d25","Type":"ContainerDied","Data":"6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8"} Jan 30 23:01:39 crc kubenswrapper[4751]: I0130 23:01:39.720995 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzbtf" event={"ID":"66a67eb5-04b0-4bcd-814d-e59031703d25","Type":"ContainerStarted","Data":"8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8"} Jan 30 23:01:39 crc kubenswrapper[4751]: I0130 23:01:39.749512 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pzbtf" podStartSLOduration=2.221068455 podStartE2EDuration="5.749492163s" podCreationTimestamp="2026-01-30 23:01:34 +0000 UTC" firstStartedPulling="2026-01-30 23:01:35.658486859 +0000 UTC m=+6434.404309508" lastFinishedPulling="2026-01-30 23:01:39.186910567 +0000 UTC m=+6437.932733216" observedRunningTime="2026-01-30 23:01:39.741940097 +0000 UTC m=+6438.487762756" watchObservedRunningTime="2026-01-30 23:01:39.749492163 +0000 UTC m=+6438.495314822" Jan 30 23:01:44 crc kubenswrapper[4751]: I0130 23:01:44.447378 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:44 crc kubenswrapper[4751]: I0130 23:01:44.448042 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:45 crc kubenswrapper[4751]: I0130 23:01:45.495407 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pzbtf" podUID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerName="registry-server" probeResult="failure" output=< Jan 30 23:01:45 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 23:01:45 crc kubenswrapper[4751]: > Jan 30 23:01:46 crc kubenswrapper[4751]: I0130 23:01:46.976238 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:01:46 crc kubenswrapper[4751]: E0130 23:01:46.976820 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:01:49 crc kubenswrapper[4751]: E0130 23:01:49.956253 4751 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.39:51958->38.102.83.39:41127: write tcp 38.102.83.39:51958->38.102.83.39:41127: write: broken pipe Jan 30 23:01:54 crc kubenswrapper[4751]: I0130 23:01:54.510414 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:54 crc kubenswrapper[4751]: I0130 23:01:54.562459 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:54 crc kubenswrapper[4751]: I0130 23:01:54.750681 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pzbtf"] Jan 30 23:01:55 crc kubenswrapper[4751]: I0130 23:01:55.897974 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pzbtf" podUID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerName="registry-server" containerID="cri-o://8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8" gracePeriod=2 Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.470698 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.590425 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-catalog-content\") pod \"66a67eb5-04b0-4bcd-814d-e59031703d25\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.590721 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz9rq\" (UniqueName: \"kubernetes.io/projected/66a67eb5-04b0-4bcd-814d-e59031703d25-kube-api-access-hz9rq\") pod \"66a67eb5-04b0-4bcd-814d-e59031703d25\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.590829 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-utilities\") pod \"66a67eb5-04b0-4bcd-814d-e59031703d25\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.592167 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-utilities" (OuterVolumeSpecName: "utilities") pod "66a67eb5-04b0-4bcd-814d-e59031703d25" (UID: "66a67eb5-04b0-4bcd-814d-e59031703d25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.604715 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a67eb5-04b0-4bcd-814d-e59031703d25-kube-api-access-hz9rq" (OuterVolumeSpecName: "kube-api-access-hz9rq") pod "66a67eb5-04b0-4bcd-814d-e59031703d25" (UID: "66a67eb5-04b0-4bcd-814d-e59031703d25"). InnerVolumeSpecName "kube-api-access-hz9rq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.653810 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66a67eb5-04b0-4bcd-814d-e59031703d25" (UID: "66a67eb5-04b0-4bcd-814d-e59031703d25"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.694894 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz9rq\" (UniqueName: \"kubernetes.io/projected/66a67eb5-04b0-4bcd-814d-e59031703d25-kube-api-access-hz9rq\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.695443 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.695636 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.913593 4751 generic.go:334] "Generic (PLEG): container finished" podID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerID="8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8" exitCode=0 Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.913641 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzbtf" event={"ID":"66a67eb5-04b0-4bcd-814d-e59031703d25","Type":"ContainerDied","Data":"8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8"} Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.913680 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzbtf" event={"ID":"66a67eb5-04b0-4bcd-814d-e59031703d25","Type":"ContainerDied","Data":"6ec1a8fb44f2780198407768eba5cde4184587063cd63e961da2205d4fa2e8c7"} Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.913700 4751 scope.go:117] "RemoveContainer" containerID="8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8" Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.913702 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.949002 4751 scope.go:117] "RemoveContainer" containerID="6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8" Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.990284 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pzbtf"] Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.995890 4751 scope.go:117] "RemoveContainer" containerID="3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea" Jan 30 23:01:57 crc kubenswrapper[4751]: I0130 23:01:57.007145 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pzbtf"] Jan 30 23:01:57 crc kubenswrapper[4751]: I0130 23:01:57.125483 4751 scope.go:117] "RemoveContainer" containerID="8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8" Jan 30 23:01:57 crc kubenswrapper[4751]: E0130 23:01:57.137361 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8\": container with ID starting with 8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8 not found: ID does not exist" containerID="8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8" Jan 30 23:01:57 crc kubenswrapper[4751]: I0130 23:01:57.137424 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8"} err="failed to get container status \"8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8\": rpc error: code = NotFound desc = could not find container \"8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8\": container with ID starting with 8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8 not found: ID does not exist" Jan 30 23:01:57 crc kubenswrapper[4751]: I0130 23:01:57.137449 4751 scope.go:117] "RemoveContainer" containerID="6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8" Jan 30 23:01:57 crc kubenswrapper[4751]: E0130 23:01:57.141493 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8\": container with ID starting with 6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8 not found: ID does not exist" containerID="6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8" Jan 30 23:01:57 crc kubenswrapper[4751]: I0130 23:01:57.141683 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8"} err="failed to get container status \"6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8\": rpc error: code = NotFound desc = could not find container \"6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8\": container with ID starting with 6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8 not found: ID does not exist" Jan 30 23:01:57 crc kubenswrapper[4751]: I0130 23:01:57.141783 4751 scope.go:117] "RemoveContainer" containerID="3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea" Jan 30 23:01:57 crc kubenswrapper[4751]: E0130 23:01:57.147477 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea\": container with ID starting with 3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea not found: ID does not exist" containerID="3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea" Jan 30 23:01:57 crc kubenswrapper[4751]: I0130 23:01:57.147752 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea"} err="failed to get container status \"3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea\": rpc error: code = NotFound desc = could not find container \"3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea\": container with ID starting with 3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea not found: ID does not exist" Jan 30 23:01:58 crc kubenswrapper[4751]: I0130 23:01:58.010283 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66a67eb5-04b0-4bcd-814d-e59031703d25" path="/var/lib/kubelet/pods/66a67eb5-04b0-4bcd-814d-e59031703d25/volumes" Jan 30 23:01:58 crc kubenswrapper[4751]: I0130 23:01:58.976020 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:01:58 crc kubenswrapper[4751]: E0130 23:01:58.976680 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:02:10 crc kubenswrapper[4751]: I0130 23:02:10.976395 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:02:10 crc kubenswrapper[4751]: E0130 23:02:10.977223 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:02:22 crc kubenswrapper[4751]: I0130 23:02:22.976283 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:02:22 crc kubenswrapper[4751]: E0130 23:02:22.978666 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:02:35 crc kubenswrapper[4751]: I0130 23:02:35.976002 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:02:35 crc kubenswrapper[4751]: E0130 23:02:35.977302 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:02:50 crc kubenswrapper[4751]: I0130 23:02:50.975668 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:02:50 crc kubenswrapper[4751]: E0130 23:02:50.976467 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:03:04 crc kubenswrapper[4751]: I0130 23:03:04.976015 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:03:04 crc kubenswrapper[4751]: E0130 23:03:04.976996 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:03:19 crc kubenswrapper[4751]: I0130 23:03:19.976409 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:03:19 crc kubenswrapper[4751]: E0130 23:03:19.977343 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:03:22 crc kubenswrapper[4751]: I0130 23:03:22.930554 4751 generic.go:334] "Generic (PLEG): container finished" podID="bc2d69f7-78aa-4618-a287-008258e34b47" containerID="d98a640d38d1a0008a9787079a3ae73e9ed1113f5304435185bdeae2c0722cd9" exitCode=0 Jan 30 23:03:22 crc kubenswrapper[4751]: I0130 23:03:22.930668 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/must-gather-xtff4" event={"ID":"bc2d69f7-78aa-4618-a287-008258e34b47","Type":"ContainerDied","Data":"d98a640d38d1a0008a9787079a3ae73e9ed1113f5304435185bdeae2c0722cd9"} Jan 30 23:03:22 crc kubenswrapper[4751]: I0130 23:03:22.932315 4751 scope.go:117] "RemoveContainer" containerID="d98a640d38d1a0008a9787079a3ae73e9ed1113f5304435185bdeae2c0722cd9" Jan 30 23:03:23 crc kubenswrapper[4751]: I0130 23:03:23.420187 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-89qbh_must-gather-xtff4_bc2d69f7-78aa-4618-a287-008258e34b47/gather/0.log" Jan 30 23:03:26 crc kubenswrapper[4751]: E0130 23:03:26.105470 4751 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.39:59878->38.102.83.39:41127: read tcp 
38.102.83.39:59878->38.102.83.39:41127: read: connection reset by peer Jan 30 23:03:30 crc kubenswrapper[4751]: I0130 23:03:30.976068 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:03:30 crc kubenswrapper[4751]: E0130 23:03:30.976819 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:03:31 crc kubenswrapper[4751]: I0130 23:03:31.181177 4751 scope.go:117] "RemoveContainer" containerID="be6e18572bbebc1a7aec700bd2eb90d12dc04a78e2daff85c59a029c18a1fcc3" Jan 30 23:03:31 crc kubenswrapper[4751]: I0130 23:03:31.224857 4751 scope.go:117] "RemoveContainer" containerID="77378602a044fe43cd54d596511e6b12c14155716b7a67523c049f4f81292b13" Jan 30 23:03:32 crc kubenswrapper[4751]: I0130 23:03:32.866495 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-89qbh/must-gather-xtff4"] Jan 30 23:03:32 crc kubenswrapper[4751]: I0130 23:03:32.885516 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-89qbh/must-gather-xtff4"] Jan 30 23:03:32 crc kubenswrapper[4751]: I0130 23:03:32.894359 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-89qbh/must-gather-xtff4" podUID="bc2d69f7-78aa-4618-a287-008258e34b47" containerName="copy" containerID="cri-o://f02334ba80bf205d6fb3fa9e2fb257541f03ce6ab3c97fbdf7d1d6f815819596" gracePeriod=2 Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.068723 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-89qbh_must-gather-xtff4_bc2d69f7-78aa-4618-a287-008258e34b47/copy/0.log" Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.070831 4751 generic.go:334] "Generic (PLEG): container finished" podID="bc2d69f7-78aa-4618-a287-008258e34b47" containerID="f02334ba80bf205d6fb3fa9e2fb257541f03ce6ab3c97fbdf7d1d6f815819596" exitCode=143 Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.589536 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-89qbh_must-gather-xtff4_bc2d69f7-78aa-4618-a287-008258e34b47/copy/0.log" Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.590047 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-89qbh/must-gather-xtff4" Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.678903 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bc2d69f7-78aa-4618-a287-008258e34b47-must-gather-output\") pod \"bc2d69f7-78aa-4618-a287-008258e34b47\" (UID: \"bc2d69f7-78aa-4618-a287-008258e34b47\") " Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.684251 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d2qv\" (UniqueName: \"kubernetes.io/projected/bc2d69f7-78aa-4618-a287-008258e34b47-kube-api-access-5d2qv\") pod \"bc2d69f7-78aa-4618-a287-008258e34b47\" (UID: \"bc2d69f7-78aa-4618-a287-008258e34b47\") " Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.710375 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2d69f7-78aa-4618-a287-008258e34b47-kube-api-access-5d2qv" (OuterVolumeSpecName: "kube-api-access-5d2qv") pod "bc2d69f7-78aa-4618-a287-008258e34b47" (UID: "bc2d69f7-78aa-4618-a287-008258e34b47"). InnerVolumeSpecName "kube-api-access-5d2qv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.790266 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d2qv\" (UniqueName: \"kubernetes.io/projected/bc2d69f7-78aa-4618-a287-008258e34b47-kube-api-access-5d2qv\") on node \"crc\" DevicePath \"\"" Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.886588 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc2d69f7-78aa-4618-a287-008258e34b47-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bc2d69f7-78aa-4618-a287-008258e34b47" (UID: "bc2d69f7-78aa-4618-a287-008258e34b47"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.892956 4751 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bc2d69f7-78aa-4618-a287-008258e34b47-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.988275 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc2d69f7-78aa-4618-a287-008258e34b47" path="/var/lib/kubelet/pods/bc2d69f7-78aa-4618-a287-008258e34b47/volumes" Jan 30 23:03:34 crc kubenswrapper[4751]: I0130 23:03:34.082553 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-89qbh_must-gather-xtff4_bc2d69f7-78aa-4618-a287-008258e34b47/copy/0.log" Jan 30 23:03:34 crc kubenswrapper[4751]: I0130 23:03:34.083400 4751 scope.go:117] "RemoveContainer" containerID="f02334ba80bf205d6fb3fa9e2fb257541f03ce6ab3c97fbdf7d1d6f815819596" Jan 30 23:03:34 crc kubenswrapper[4751]: I0130 23:03:34.083437 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-89qbh/must-gather-xtff4" Jan 30 23:03:34 crc kubenswrapper[4751]: I0130 23:03:34.123383 4751 scope.go:117] "RemoveContainer" containerID="d98a640d38d1a0008a9787079a3ae73e9ed1113f5304435185bdeae2c0722cd9" Jan 30 23:03:42 crc kubenswrapper[4751]: I0130 23:03:42.976446 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:03:42 crc kubenswrapper[4751]: E0130 23:03:42.977244 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:03:53 crc kubenswrapper[4751]: I0130 23:03:53.976319 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:03:53 crc kubenswrapper[4751]: E0130 23:03:53.978745 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:04:05 crc kubenswrapper[4751]: I0130 23:04:05.975734 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:04:05 crc kubenswrapper[4751]: E0130 23:04:05.976822 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:04:18 crc kubenswrapper[4751]: I0130 23:04:18.977304 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:04:18 crc kubenswrapper[4751]: E0130 23:04:18.978700 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:04:31 crc kubenswrapper[4751]: I0130 23:04:31.982804 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:04:31 crc kubenswrapper[4751]: E0130 23:04:31.983612 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:04:43 crc kubenswrapper[4751]: I0130 23:04:43.544849 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:04:43 crc kubenswrapper[4751]: E0130 23:04:43.550710 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:04:57 crc kubenswrapper[4751]: I0130 23:04:57.976586 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:04:58 crc kubenswrapper[4751]: I0130 23:04:58.777060 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"c0845ebeb9e2f3643b084913909a3731e29a9707e2ccc2dbf0c44b6138b618a8"} Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.489664 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b4mk6"] Jan 30 23:06:32 crc kubenswrapper[4751]: E0130 23:06:32.492176 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerName="extract-content" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.492223 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerName="extract-content" Jan 30 23:06:32 crc kubenswrapper[4751]: E0130 23:06:32.492246 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerName="registry-server" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.492257 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerName="registry-server" Jan 30 23:06:32 crc kubenswrapper[4751]: E0130 23:06:32.492319 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2d69f7-78aa-4618-a287-008258e34b47" containerName="gather" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.492358 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2d69f7-78aa-4618-a287-008258e34b47" containerName="gather" Jan 30 23:06:32 crc kubenswrapper[4751]: E0130 23:06:32.492419 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2d69f7-78aa-4618-a287-008258e34b47" containerName="copy" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.492432 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2d69f7-78aa-4618-a287-008258e34b47" containerName="copy" Jan 30 23:06:32 crc kubenswrapper[4751]: E0130 23:06:32.492500 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerName="extract-utilities" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.492513 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerName="extract-utilities" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.493623 4751 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bc2d69f7-78aa-4618-a287-008258e34b47" containerName="copy" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.493672 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2d69f7-78aa-4618-a287-008258e34b47" containerName="gather" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.493727 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerName="registry-server" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.499653 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.523278 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4mk6"] Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.637211 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-utilities\") pod \"redhat-marketplace-b4mk6\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.637287 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psqzg\" (UniqueName: \"kubernetes.io/projected/e472c7cc-765c-470f-95aa-3982eefa2753-kube-api-access-psqzg\") pod \"redhat-marketplace-b4mk6\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.637398 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-catalog-content\") pod \"redhat-marketplace-b4mk6\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.739796 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psqzg\" (UniqueName: \"kubernetes.io/projected/e472c7cc-765c-470f-95aa-3982eefa2753-kube-api-access-psqzg\") pod \"redhat-marketplace-b4mk6\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.739901 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-catalog-content\") pod \"redhat-marketplace-b4mk6\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.740136 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-utilities\") pod \"redhat-marketplace-b4mk6\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.740422 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-catalog-content\") pod \"redhat-marketplace-b4mk6\" (UID: 
\"e472c7cc-765c-470f-95aa-3982eefa2753\") " pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.740618 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-utilities\") pod \"redhat-marketplace-b4mk6\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.763625 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psqzg\" (UniqueName: \"kubernetes.io/projected/e472c7cc-765c-470f-95aa-3982eefa2753-kube-api-access-psqzg\") pod \"redhat-marketplace-b4mk6\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.829821 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:33 crc kubenswrapper[4751]: I0130 23:06:33.345313 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4mk6"] Jan 30 23:06:34 crc kubenswrapper[4751]: I0130 23:06:34.034177 4751 generic.go:334] "Generic (PLEG): container finished" podID="e472c7cc-765c-470f-95aa-3982eefa2753" containerID="f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd" exitCode=0 Jan 30 23:06:34 crc kubenswrapper[4751]: I0130 23:06:34.034239 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4mk6" event={"ID":"e472c7cc-765c-470f-95aa-3982eefa2753","Type":"ContainerDied","Data":"f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd"} Jan 30 23:06:34 crc kubenswrapper[4751]: I0130 23:06:34.034476 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4mk6" event={"ID":"e472c7cc-765c-470f-95aa-3982eefa2753","Type":"ContainerStarted","Data":"6b9d30ce2f94bfd7ccb71aeea4e3f32cb9dc52bc57142b67658be1b628cc6d99"} Jan 30 23:06:34 crc kubenswrapper[4751]: I0130 23:06:34.037376 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 23:06:36 crc kubenswrapper[4751]: I0130 23:06:36.078470 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4mk6" event={"ID":"e472c7cc-765c-470f-95aa-3982eefa2753","Type":"ContainerStarted","Data":"32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314"} Jan 30 23:06:37 crc kubenswrapper[4751]: I0130 23:06:37.093492 4751 generic.go:334] "Generic (PLEG): container finished" podID="e472c7cc-765c-470f-95aa-3982eefa2753" containerID="32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314" exitCode=0 Jan 30 23:06:37 crc kubenswrapper[4751]: I0130 23:06:37.093550 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4mk6" event={"ID":"e472c7cc-765c-470f-95aa-3982eefa2753","Type":"ContainerDied","Data":"32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314"} Jan 30 23:06:38 crc kubenswrapper[4751]: I0130 23:06:38.108398 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4mk6" event={"ID":"e472c7cc-765c-470f-95aa-3982eefa2753","Type":"ContainerStarted","Data":"0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177"} Jan 30 23:06:38 crc 
kubenswrapper[4751]: I0130 23:06:38.129170 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b4mk6" podStartSLOduration=2.647896669 podStartE2EDuration="6.129148238s" podCreationTimestamp="2026-01-30 23:06:32 +0000 UTC" firstStartedPulling="2026-01-30 23:06:34.036036256 +0000 UTC m=+6732.781858905" lastFinishedPulling="2026-01-30 23:06:37.517287825 +0000 UTC m=+6736.263110474" observedRunningTime="2026-01-30 23:06:38.126944577 +0000 UTC m=+6736.872767256" watchObservedRunningTime="2026-01-30 23:06:38.129148238 +0000 UTC m=+6736.874970897" Jan 30 23:06:42 crc kubenswrapper[4751]: I0130 23:06:42.830174 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:42 crc kubenswrapper[4751]: I0130 23:06:42.830836 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:43 crc kubenswrapper[4751]: I0130 23:06:43.911516 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-b4mk6" podUID="e472c7cc-765c-470f-95aa-3982eefa2753" containerName="registry-server" probeResult="failure" output=< Jan 30 23:06:43 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 23:06:43 crc kubenswrapper[4751]: > Jan 30 23:06:52 crc kubenswrapper[4751]: I0130 23:06:52.961801 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:53 crc kubenswrapper[4751]: I0130 23:06:53.007486 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:53 crc kubenswrapper[4751]: I0130 23:06:53.202696 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4mk6"] Jan 30 23:06:54 crc kubenswrapper[4751]: I0130 23:06:54.348166 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b4mk6" podUID="e472c7cc-765c-470f-95aa-3982eefa2753" containerName="registry-server" containerID="cri-o://0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177" gracePeriod=2 Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:54.936644 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.022997 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psqzg\" (UniqueName: \"kubernetes.io/projected/e472c7cc-765c-470f-95aa-3982eefa2753-kube-api-access-psqzg\") pod \"e472c7cc-765c-470f-95aa-3982eefa2753\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.023133 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-catalog-content\") pod \"e472c7cc-765c-470f-95aa-3982eefa2753\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.023423 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-utilities\") pod \"e472c7cc-765c-470f-95aa-3982eefa2753\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.024091 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-utilities" (OuterVolumeSpecName: "utilities") pod "e472c7cc-765c-470f-95aa-3982eefa2753" (UID: "e472c7cc-765c-470f-95aa-3982eefa2753"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.029604 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e472c7cc-765c-470f-95aa-3982eefa2753-kube-api-access-psqzg" (OuterVolumeSpecName: "kube-api-access-psqzg") pod "e472c7cc-765c-470f-95aa-3982eefa2753" (UID: "e472c7cc-765c-470f-95aa-3982eefa2753"). InnerVolumeSpecName "kube-api-access-psqzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.050344 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e472c7cc-765c-470f-95aa-3982eefa2753" (UID: "e472c7cc-765c-470f-95aa-3982eefa2753"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.127054 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.127083 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psqzg\" (UniqueName: \"kubernetes.io/projected/e472c7cc-765c-470f-95aa-3982eefa2753-kube-api-access-psqzg\") on node \"crc\" DevicePath \"\"" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.127093 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.362261 4751 generic.go:334] "Generic (PLEG): container finished" podID="e472c7cc-765c-470f-95aa-3982eefa2753" containerID="0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177" exitCode=0 Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.362362 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4mk6" event={"ID":"e472c7cc-765c-470f-95aa-3982eefa2753","Type":"ContainerDied","Data":"0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177"} Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.362611 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4mk6" event={"ID":"e472c7cc-765c-470f-95aa-3982eefa2753","Type":"ContainerDied","Data":"6b9d30ce2f94bfd7ccb71aeea4e3f32cb9dc52bc57142b67658be1b628cc6d99"} Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.362633 4751 scope.go:117] "RemoveContainer" containerID="0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.362425 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.383065 4751 scope.go:117] "RemoveContainer" containerID="32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.402022 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4mk6"] Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.415857 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4mk6"] Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.438546 4751 scope.go:117] "RemoveContainer" containerID="f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.470154 4751 scope.go:117] "RemoveContainer" containerID="0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177" Jan 30 23:06:55 crc kubenswrapper[4751]: E0130 23:06:55.471534 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177\": container with ID starting with 0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177 not found: ID does not exist" containerID="0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.471570 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177"} err="failed to get container status \"0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177\": rpc error: code = NotFound desc = could not find container \"0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177\": container with ID starting with 0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177 not found: ID does not exist" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.471592 4751 scope.go:117] "RemoveContainer" containerID="32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314" Jan 30 23:06:55 crc kubenswrapper[4751]: E0130 23:06:55.471931 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314\": container with ID starting with 32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314 not found: ID does not exist" containerID="32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.471957 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314"} err="failed to get container status \"32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314\": rpc error: code = NotFound desc = could not find container \"32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314\": container with ID starting with 32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314 not found: ID does not exist" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.471976 4751 scope.go:117] "RemoveContainer" containerID="f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd" Jan 30 23:06:55 crc kubenswrapper[4751]: E0130 23:06:55.472486 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd\": container with ID starting with f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd not found: ID does not exist" containerID="f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.472512 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd"} err="failed to get container status \"f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd\": rpc error: code = NotFound desc = could not find container \"f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd\": container with ID starting with f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd not found: ID does not exist" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.994063 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e472c7cc-765c-470f-95aa-3982eefa2753" path="/var/lib/kubelet/pods/e472c7cc-765c-470f-95aa-3982eefa2753/volumes" Jan 30 23:07:13 crc kubenswrapper[4751]: I0130 23:07:13.434728 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6989c95c85-6thsl" podUID="68910b8d-2ec3-4b7c-956c-e3d3518042cf" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 30 23:07:24 crc kubenswrapper[4751]: I0130 23:07:24.126712 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:07:24 crc kubenswrapper[4751]: I0130 23:07:24.127202 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.659617 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p2b44"] Jan 30 23:07:32 crc kubenswrapper[4751]: E0130 23:07:32.660548 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e472c7cc-765c-470f-95aa-3982eefa2753" containerName="extract-utilities" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.660561 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e472c7cc-765c-470f-95aa-3982eefa2753" containerName="extract-utilities" Jan 30 23:07:32 crc kubenswrapper[4751]: E0130 23:07:32.660573 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e472c7cc-765c-470f-95aa-3982eefa2753" containerName="extract-content" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.660582 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e472c7cc-765c-470f-95aa-3982eefa2753" containerName="extract-content" Jan 30 23:07:32 crc kubenswrapper[4751]: E0130 23:07:32.660602 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e472c7cc-765c-470f-95aa-3982eefa2753" containerName="registry-server" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.660610 4751 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e472c7cc-765c-470f-95aa-3982eefa2753" containerName="registry-server" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.660833 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e472c7cc-765c-470f-95aa-3982eefa2753" containerName="registry-server" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.664210 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.677120 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p2b44"] Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.746653 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-utilities\") pod \"redhat-operators-p2b44\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.746718 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-catalog-content\") pod \"redhat-operators-p2b44\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.746887 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf7bg\" (UniqueName: \"kubernetes.io/projected/310d0b6f-f293-446a-8648-ca291f0f429b-kube-api-access-pf7bg\") pod \"redhat-operators-p2b44\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.849319 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-utilities\") pod \"redhat-operators-p2b44\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.849695 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-catalog-content\") pod \"redhat-operators-p2b44\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.849746 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf7bg\" (UniqueName: \"kubernetes.io/projected/310d0b6f-f293-446a-8648-ca291f0f429b-kube-api-access-pf7bg\") pod \"redhat-operators-p2b44\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.850159 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-catalog-content\") pod \"redhat-operators-p2b44\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.850304 4751 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-utilities\") pod \"redhat-operators-p2b44\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.875455 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf7bg\" (UniqueName: \"kubernetes.io/projected/310d0b6f-f293-446a-8648-ca291f0f429b-kube-api-access-pf7bg\") pod \"redhat-operators-p2b44\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:33 crc kubenswrapper[4751]: I0130 23:07:33.009445 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:33 crc kubenswrapper[4751]: I0130 23:07:33.602064 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p2b44"] Jan 30 23:07:33 crc kubenswrapper[4751]: I0130 23:07:33.822868 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2b44" event={"ID":"310d0b6f-f293-446a-8648-ca291f0f429b","Type":"ContainerStarted","Data":"d0b5b81d968d8999b59ba948d3f60aa48b354267086a1f491f86c20822b3a714"} Jan 30 23:07:34 crc kubenswrapper[4751]: I0130 23:07:34.833776 4751 generic.go:334] "Generic (PLEG): container finished" podID="310d0b6f-f293-446a-8648-ca291f0f429b" containerID="0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49" exitCode=0 Jan 30 23:07:34 crc kubenswrapper[4751]: I0130 23:07:34.834022 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2b44" event={"ID":"310d0b6f-f293-446a-8648-ca291f0f429b","Type":"ContainerDied","Data":"0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49"} Jan 30 23:07:35 crc kubenswrapper[4751]: I0130 23:07:35.851890 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2b44" event={"ID":"310d0b6f-f293-446a-8648-ca291f0f429b","Type":"ContainerStarted","Data":"e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7"} Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.634821 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xvjxw"] Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.638290 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.649354 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xvjxw"] Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.747192 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-catalog-content\") pod \"certified-operators-xvjxw\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.747404 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-utilities\") pod \"certified-operators-xvjxw\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.747653 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jh6z\" (UniqueName: \"kubernetes.io/projected/0dc499e3-1ee1-422d-8adc-2a493249e84d-kube-api-access-5jh6z\") pod \"certified-operators-xvjxw\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.849614 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-catalog-content\") pod \"certified-operators-xvjxw\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.849698 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-utilities\") pod \"certified-operators-xvjxw\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.849859 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jh6z\" (UniqueName: \"kubernetes.io/projected/0dc499e3-1ee1-422d-8adc-2a493249e84d-kube-api-access-5jh6z\") pod \"certified-operators-xvjxw\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.850606 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-catalog-content\") pod \"certified-operators-xvjxw\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.850599 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-utilities\") pod \"certified-operators-xvjxw\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.878047 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5jh6z\" (UniqueName: \"kubernetes.io/projected/0dc499e3-1ee1-422d-8adc-2a493249e84d-kube-api-access-5jh6z\") pod \"certified-operators-xvjxw\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.914895 4751 generic.go:334] "Generic (PLEG): container finished" podID="310d0b6f-f293-446a-8648-ca291f0f429b" containerID="e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7" exitCode=0 Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.914962 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2b44" event={"ID":"310d0b6f-f293-446a-8648-ca291f0f429b","Type":"ContainerDied","Data":"e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7"} Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.963818 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:41 crc kubenswrapper[4751]: I0130 23:07:41.445010 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xvjxw"] Jan 30 23:07:41 crc kubenswrapper[4751]: I0130 23:07:41.945893 4751 generic.go:334] "Generic (PLEG): container finished" podID="0dc499e3-1ee1-422d-8adc-2a493249e84d" containerID="4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0" exitCode=0 Jan 30 23:07:41 crc kubenswrapper[4751]: I0130 23:07:41.946102 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvjxw" event={"ID":"0dc499e3-1ee1-422d-8adc-2a493249e84d","Type":"ContainerDied","Data":"4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0"} Jan 30 23:07:41 crc kubenswrapper[4751]: I0130 23:07:41.946345 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvjxw" event={"ID":"0dc499e3-1ee1-422d-8adc-2a493249e84d","Type":"ContainerStarted","Data":"3db843790871be207e59830516f8c964af91d135af682a34952dccf6663b4ce4"} Jan 30 23:07:41 crc kubenswrapper[4751]: I0130 23:07:41.949477 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2b44" event={"ID":"310d0b6f-f293-446a-8648-ca291f0f429b","Type":"ContainerStarted","Data":"23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4"} Jan 30 23:07:41 crc kubenswrapper[4751]: I0130 23:07:41.988767 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p2b44" podStartSLOduration=3.458686529 podStartE2EDuration="9.988751407s" podCreationTimestamp="2026-01-30 23:07:32 +0000 UTC" firstStartedPulling="2026-01-30 23:07:34.836956786 +0000 UTC m=+6793.582779435" lastFinishedPulling="2026-01-30 23:07:41.367021664 +0000 UTC m=+6800.112844313" observedRunningTime="2026-01-30 23:07:41.988451009 +0000 UTC m=+6800.734273658" watchObservedRunningTime="2026-01-30 23:07:41.988751407 +0000 UTC m=+6800.734574056" Jan 30 23:07:42 crc kubenswrapper[4751]: I0130 23:07:42.967516 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvjxw" event={"ID":"0dc499e3-1ee1-422d-8adc-2a493249e84d","Type":"ContainerStarted","Data":"5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df"} Jan 30 23:07:43 crc kubenswrapper[4751]: I0130 23:07:43.009943 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:43 crc kubenswrapper[4751]: I0130 23:07:43.010005 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:44 crc kubenswrapper[4751]: I0130 23:07:44.062148 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p2b44" podUID="310d0b6f-f293-446a-8648-ca291f0f429b" containerName="registry-server" probeResult="failure" output=< Jan 30 23:07:44 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 23:07:44 crc kubenswrapper[4751]: > Jan 30 23:07:44 crc kubenswrapper[4751]: I0130 23:07:44.987700 4751 generic.go:334] "Generic (PLEG): container finished" podID="0dc499e3-1ee1-422d-8adc-2a493249e84d" containerID="5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df" exitCode=0 Jan 30 23:07:44 crc kubenswrapper[4751]: I0130 23:07:44.987771 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvjxw" event={"ID":"0dc499e3-1ee1-422d-8adc-2a493249e84d","Type":"ContainerDied","Data":"5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df"} Jan 30 23:07:46 crc kubenswrapper[4751]: I0130 23:07:46.004660 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvjxw" event={"ID":"0dc499e3-1ee1-422d-8adc-2a493249e84d","Type":"ContainerStarted","Data":"7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541"} Jan 30 23:07:46 crc kubenswrapper[4751]: I0130 23:07:46.043033 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xvjxw" podStartSLOduration=2.591036828 podStartE2EDuration="6.043009097s" podCreationTimestamp="2026-01-30 23:07:40 +0000 UTC" firstStartedPulling="2026-01-30 23:07:41.947786047 +0000 UTC m=+6800.693608696" lastFinishedPulling="2026-01-30 23:07:45.399758316 +0000 UTC m=+6804.145580965" observedRunningTime="2026-01-30 23:07:46.036690735 +0000 UTC m=+6804.782513384" watchObservedRunningTime="2026-01-30 23:07:46.043009097 +0000 UTC m=+6804.788831746" Jan 30 23:07:50 crc kubenswrapper[4751]: I0130 23:07:50.964170 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:50 crc kubenswrapper[4751]: I0130 23:07:50.964820 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:51 crc kubenswrapper[4751]: I0130 23:07:51.040727 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:51 crc kubenswrapper[4751]: I0130 23:07:51.125568 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:51 crc kubenswrapper[4751]: I0130 23:07:51.294445 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xvjxw"] Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 23:07:53.075618 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xvjxw" podUID="0dc499e3-1ee1-422d-8adc-2a493249e84d" containerName="registry-server" containerID="cri-o://7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541" gracePeriod=2 Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 
23:07:53.757206 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 23:07:53.771270 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jh6z\" (UniqueName: \"kubernetes.io/projected/0dc499e3-1ee1-422d-8adc-2a493249e84d-kube-api-access-5jh6z\") pod \"0dc499e3-1ee1-422d-8adc-2a493249e84d\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 23:07:53.771434 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-utilities\") pod \"0dc499e3-1ee1-422d-8adc-2a493249e84d\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 23:07:53.771493 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-catalog-content\") pod \"0dc499e3-1ee1-422d-8adc-2a493249e84d\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 23:07:53.773665 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-utilities" (OuterVolumeSpecName: "utilities") pod "0dc499e3-1ee1-422d-8adc-2a493249e84d" (UID: "0dc499e3-1ee1-422d-8adc-2a493249e84d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 23:07:53.782526 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dc499e3-1ee1-422d-8adc-2a493249e84d-kube-api-access-5jh6z" (OuterVolumeSpecName: "kube-api-access-5jh6z") pod "0dc499e3-1ee1-422d-8adc-2a493249e84d" (UID: "0dc499e3-1ee1-422d-8adc-2a493249e84d"). InnerVolumeSpecName "kube-api-access-5jh6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 23:07:53.827283 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0dc499e3-1ee1-422d-8adc-2a493249e84d" (UID: "0dc499e3-1ee1-422d-8adc-2a493249e84d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 23:07:53.874807 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 23:07:53.874864 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 23:07:53.874881 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jh6z\" (UniqueName: \"kubernetes.io/projected/0dc499e3-1ee1-422d-8adc-2a493249e84d-kube-api-access-5jh6z\") on node \"crc\" DevicePath \"\"" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.060167 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p2b44" podUID="310d0b6f-f293-446a-8648-ca291f0f429b" containerName="registry-server" probeResult="failure" output=< Jan 30 23:07:54 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 23:07:54 crc kubenswrapper[4751]: > Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.090750 4751 generic.go:334] "Generic (PLEG): container finished" podID="0dc499e3-1ee1-422d-8adc-2a493249e84d" containerID="7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541" exitCode=0 Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.090794 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.090789 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvjxw" event={"ID":"0dc499e3-1ee1-422d-8adc-2a493249e84d","Type":"ContainerDied","Data":"7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541"} Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.090939 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvjxw" event={"ID":"0dc499e3-1ee1-422d-8adc-2a493249e84d","Type":"ContainerDied","Data":"3db843790871be207e59830516f8c964af91d135af682a34952dccf6663b4ce4"} Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.090965 4751 scope.go:117] "RemoveContainer" containerID="7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.116172 4751 scope.go:117] "RemoveContainer" containerID="5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.116844 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xvjxw"] Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.126944 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.127002 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.127025 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xvjxw"] Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.144214 4751 scope.go:117] "RemoveContainer" containerID="4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.203804 4751 scope.go:117] "RemoveContainer" containerID="7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541" Jan 30 23:07:54 crc kubenswrapper[4751]: E0130 23:07:54.204352 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541\": container with ID starting with 7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541 not found: ID does not exist" containerID="7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.204401 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541"} err="failed to get container status \"7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541\": rpc error: code = NotFound desc = could not find container \"7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541\": container with ID starting with 7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541 not found: ID does not exist" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.204430 4751 scope.go:117] "RemoveContainer" containerID="5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df" Jan 30 23:07:54 crc kubenswrapper[4751]: E0130 23:07:54.204973 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df\": container with ID starting with 5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df not found: ID does not exist" containerID="5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.205007 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df"} err="failed to get container status \"5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df\": rpc error: code = NotFound desc = could not find container \"5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df\": container with ID starting with 5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df not found: ID does not exist" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.205030 4751 scope.go:117] "RemoveContainer" containerID="4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0" Jan 30 23:07:54 crc kubenswrapper[4751]: E0130 23:07:54.205338 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0\": container with ID starting with 4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0 not found: ID does not exist" 
containerID="4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.205374 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0"} err="failed to get container status \"4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0\": rpc error: code = NotFound desc = could not find container \"4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0\": container with ID starting with 4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0 not found: ID does not exist" Jan 30 23:07:55 crc kubenswrapper[4751]: I0130 23:07:55.994135 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dc499e3-1ee1-422d-8adc-2a493249e84d" path="/var/lib/kubelet/pods/0dc499e3-1ee1-422d-8adc-2a493249e84d/volumes" Jan 30 23:08:04 crc kubenswrapper[4751]: I0130 23:08:04.069557 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p2b44" podUID="310d0b6f-f293-446a-8648-ca291f0f429b" containerName="registry-server" probeResult="failure" output=< Jan 30 23:08:04 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 23:08:04 crc kubenswrapper[4751]: > Jan 30 23:08:14 crc kubenswrapper[4751]: I0130 23:08:14.064277 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p2b44" podUID="310d0b6f-f293-446a-8648-ca291f0f429b" containerName="registry-server" probeResult="failure" output=< Jan 30 23:08:14 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 23:08:14 crc kubenswrapper[4751]: > Jan 30 23:08:23 crc kubenswrapper[4751]: I0130 23:08:23.072068 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:08:23 crc kubenswrapper[4751]: I0130 23:08:23.138691 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:08:23 crc kubenswrapper[4751]: I0130 23:08:23.324992 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p2b44"] Jan 30 23:08:24 crc kubenswrapper[4751]: I0130 23:08:24.126789 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:08:24 crc kubenswrapper[4751]: I0130 23:08:24.127200 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:08:24 crc kubenswrapper[4751]: I0130 23:08:24.127263 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 23:08:24 crc kubenswrapper[4751]: I0130 23:08:24.129414 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c0845ebeb9e2f3643b084913909a3731e29a9707e2ccc2dbf0c44b6138b618a8"} 
pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 23:08:24 crc kubenswrapper[4751]: I0130 23:08:24.129606 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://c0845ebeb9e2f3643b084913909a3731e29a9707e2ccc2dbf0c44b6138b618a8" gracePeriod=600 Jan 30 23:08:24 crc kubenswrapper[4751]: I0130 23:08:24.448566 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="c0845ebeb9e2f3643b084913909a3731e29a9707e2ccc2dbf0c44b6138b618a8" exitCode=0 Jan 30 23:08:24 crc kubenswrapper[4751]: I0130 23:08:24.448631 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"c0845ebeb9e2f3643b084913909a3731e29a9707e2ccc2dbf0c44b6138b618a8"} Jan 30 23:08:24 crc kubenswrapper[4751]: I0130 23:08:24.448948 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:08:24 crc kubenswrapper[4751]: I0130 23:08:24.449095 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p2b44" podUID="310d0b6f-f293-446a-8648-ca291f0f429b" containerName="registry-server" containerID="cri-o://23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4" gracePeriod=2 Jan 30 23:08:24 crc kubenswrapper[4751]: I0130 23:08:24.996634 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.075144 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf7bg\" (UniqueName: \"kubernetes.io/projected/310d0b6f-f293-446a-8648-ca291f0f429b-kube-api-access-pf7bg\") pod \"310d0b6f-f293-446a-8648-ca291f0f429b\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.075382 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-catalog-content\") pod \"310d0b6f-f293-446a-8648-ca291f0f429b\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.075495 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-utilities\") pod \"310d0b6f-f293-446a-8648-ca291f0f429b\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.077045 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-utilities" (OuterVolumeSpecName: "utilities") pod "310d0b6f-f293-446a-8648-ca291f0f429b" (UID: "310d0b6f-f293-446a-8648-ca291f0f429b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.080618 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/310d0b6f-f293-446a-8648-ca291f0f429b-kube-api-access-pf7bg" (OuterVolumeSpecName: "kube-api-access-pf7bg") pod "310d0b6f-f293-446a-8648-ca291f0f429b" (UID: "310d0b6f-f293-446a-8648-ca291f0f429b"). InnerVolumeSpecName "kube-api-access-pf7bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.178182 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.178227 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf7bg\" (UniqueName: \"kubernetes.io/projected/310d0b6f-f293-446a-8648-ca291f0f429b-kube-api-access-pf7bg\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.195983 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "310d0b6f-f293-446a-8648-ca291f0f429b" (UID: "310d0b6f-f293-446a-8648-ca291f0f429b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.279732 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.465439 4751 generic.go:334] "Generic (PLEG): container finished" podID="310d0b6f-f293-446a-8648-ca291f0f429b" containerID="23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4" exitCode=0 Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.465492 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2b44" event={"ID":"310d0b6f-f293-446a-8648-ca291f0f429b","Type":"ContainerDied","Data":"23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4"} Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.465541 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2b44" event={"ID":"310d0b6f-f293-446a-8648-ca291f0f429b","Type":"ContainerDied","Data":"d0b5b81d968d8999b59ba948d3f60aa48b354267086a1f491f86c20822b3a714"} Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.465561 4751 scope.go:117] "RemoveContainer" containerID="23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.465918 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.472550 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"fb4f57a963641dd6b90b3701ece0df7387775d4b3de3e0cbb0cc1824d9bf62d7"} Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.508209 4751 scope.go:117] "RemoveContainer" containerID="e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.522493 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p2b44"] Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.533199 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p2b44"] Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.536100 4751 scope.go:117] "RemoveContainer" containerID="0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.590070 4751 scope.go:117] "RemoveContainer" containerID="23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4" Jan 30 23:08:25 crc kubenswrapper[4751]: E0130 23:08:25.590858 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4\": container with ID starting with 23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4 not found: ID does not exist" containerID="23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.590896 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4"} err="failed to get container status \"23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4\": rpc error: code = NotFound desc = could not find container \"23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4\": container with ID starting with 23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4 not found: ID does not exist" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.590947 4751 scope.go:117] "RemoveContainer" containerID="e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7" Jan 30 23:08:25 crc kubenswrapper[4751]: E0130 23:08:25.591393 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7\": container with ID starting with e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7 not found: ID does not exist" containerID="e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.591439 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7"} err="failed to get container status \"e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7\": rpc error: code = NotFound desc = could not find container \"e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7\": container with ID starting with 
e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7 not found: ID does not exist" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.591473 4751 scope.go:117] "RemoveContainer" containerID="0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49" Jan 30 23:08:25 crc kubenswrapper[4751]: E0130 23:08:25.591810 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49\": container with ID starting with 0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49 not found: ID does not exist" containerID="0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.591835 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49"} err="failed to get container status \"0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49\": rpc error: code = NotFound desc = could not find container \"0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49\": container with ID starting with 0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49 not found: ID does not exist" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.990233 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="310d0b6f-f293-446a-8648-ca291f0f429b" path="/var/lib/kubelet/pods/310d0b6f-f293-446a-8648-ca291f0f429b/volumes"